00:00:00.002 Started by upstream project "autotest-nightly-lts" build number 1913 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3174 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.042 The recommended git tool is: git 00:00:00.042 using credential 00000000-0000-0000-0000-000000000002 00:00:00.044 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.068 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.097 Using shallow fetch with depth 1 00:00:00.097 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.097 > git --version # timeout=10 00:00:00.133 > git --version # 'git version 2.39.2' 00:00:00.133 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.173 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.174 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.216 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.228 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.240 Checking out Revision bdda68d1e41499f94b336830106e36e3602574f3 (FETCH_HEAD) 00:00:03.240 > git config core.sparsecheckout # timeout=10 00:00:03.251 > git read-tree -mu HEAD # timeout=10 00:00:03.267 > git checkout -f bdda68d1e41499f94b336830106e36e3602574f3 # timeout=5 00:00:03.289 Commit message: "jenkins/jjb-config: Make sure proxies are set for pkgdep jobs" 00:00:03.289 > git rev-list --no-walk d763a45cd581fc315bd89c929406ef8de2500459 # timeout=10 00:00:03.372 [Pipeline] Start of Pipeline 00:00:03.385 [Pipeline] library 00:00:03.387 Loading library shm_lib@master 00:00:03.387 Library shm_lib@master is cached. Copying from home. 00:00:03.402 [Pipeline] node 00:00:03.410 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.412 [Pipeline] { 00:00:03.421 [Pipeline] catchError 00:00:03.422 [Pipeline] { 00:00:03.434 [Pipeline] wrap 00:00:03.443 [Pipeline] { 00:00:03.451 [Pipeline] stage 00:00:03.453 [Pipeline] { (Prologue) 00:00:03.634 [Pipeline] sh 00:00:03.924 + logger -p user.info -t JENKINS-CI 00:00:03.939 [Pipeline] echo 00:00:03.940 Node: CYP12 00:00:03.946 [Pipeline] sh 00:00:04.249 [Pipeline] setCustomBuildProperty 00:00:04.257 [Pipeline] echo 00:00:04.258 Cleanup processes 00:00:04.261 [Pipeline] sh 00:00:04.547 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.547 729882 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.559 [Pipeline] sh 00:00:04.845 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.846 ++ grep -v 'sudo pgrep' 00:00:04.846 ++ awk '{print $1}' 00:00:04.846 + sudo kill -9 00:00:04.846 + true 00:00:04.862 [Pipeline] cleanWs 00:00:04.872 [WS-CLEANUP] Deleting project workspace... 00:00:04.872 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.880 [WS-CLEANUP] done 00:00:04.886 [Pipeline] setCustomBuildProperty 00:00:04.901 [Pipeline] sh 00:00:05.189 + sudo git config --global --replace-all safe.directory '*' 00:00:05.245 [Pipeline] nodesByLabel 00:00:05.247 Found a total of 2 nodes with the 'sorcerer' label 00:00:05.255 [Pipeline] httpRequest 00:00:05.259 HttpMethod: GET 00:00:05.260 URL: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz 00:00:05.268 Sending request to url: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz 00:00:05.272 Response Code: HTTP/1.1 200 OK 00:00:05.272 Success: Status code 200 is in the accepted range: 200,404 00:00:05.273 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz 00:00:06.204 [Pipeline] sh 00:00:06.490 + tar --no-same-owner -xf jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz 00:00:06.506 [Pipeline] httpRequest 00:00:06.510 HttpMethod: GET 00:00:06.510 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:06.511 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:06.516 Response Code: HTTP/1.1 200 OK 00:00:06.516 Success: Status code 200 is in the accepted range: 200,404 00:00:06.517 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:07.383 [Pipeline] sh 00:01:07.675 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:10.989 [Pipeline] sh 00:01:11.286 + git -C spdk log --oneline -n5 00:01:11.286 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:11.286 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:01:11.286 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:11.286 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:01:11.286 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:01:11.299 [Pipeline] } 00:01:11.315 [Pipeline] // stage 00:01:11.323 [Pipeline] stage 00:01:11.325 [Pipeline] { (Prepare) 00:01:11.342 [Pipeline] writeFile 00:01:11.358 [Pipeline] sh 00:01:11.645 + logger -p user.info -t JENKINS-CI 00:01:11.658 [Pipeline] sh 00:01:11.943 + logger -p user.info -t JENKINS-CI 00:01:11.956 [Pipeline] sh 00:01:12.241 + cat autorun-spdk.conf 00:01:12.242 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.242 SPDK_TEST_NVMF=1 00:01:12.242 SPDK_TEST_NVME_CLI=1 00:01:12.242 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.242 SPDK_TEST_NVMF_NICS=e810 00:01:12.242 SPDK_RUN_UBSAN=1 00:01:12.242 NET_TYPE=phy 00:01:12.250 RUN_NIGHTLY=1 00:01:12.254 [Pipeline] readFile 00:01:12.276 [Pipeline] withEnv 00:01:12.278 [Pipeline] { 00:01:12.291 [Pipeline] sh 00:01:12.581 + set -ex 00:01:12.581 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:12.581 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:12.581 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.581 ++ SPDK_TEST_NVMF=1 00:01:12.581 ++ SPDK_TEST_NVME_CLI=1 00:01:12.581 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.581 ++ SPDK_TEST_NVMF_NICS=e810 00:01:12.581 ++ SPDK_RUN_UBSAN=1 00:01:12.581 ++ NET_TYPE=phy 00:01:12.581 ++ RUN_NIGHTLY=1 00:01:12.581 + case $SPDK_TEST_NVMF_NICS in 00:01:12.581 + DRIVERS=ice 00:01:12.581 + [[ tcp == \r\d\m\a ]] 00:01:12.581 + [[ -n ice ]] 00:01:12.581 + sudo rmmod mlx4_ib mlx5_ib irdma 
i40iw iw_cxgb4 00:01:12.581 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:12.581 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:12.581 rmmod: ERROR: Module irdma is not currently loaded 00:01:12.581 rmmod: ERROR: Module i40iw is not currently loaded 00:01:12.581 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:12.581 + true 00:01:12.581 + for D in $DRIVERS 00:01:12.581 + sudo modprobe ice 00:01:12.581 + exit 0 00:01:12.592 [Pipeline] } 00:01:12.609 [Pipeline] // withEnv 00:01:12.614 [Pipeline] } 00:01:12.631 [Pipeline] // stage 00:01:12.640 [Pipeline] catchError 00:01:12.642 [Pipeline] { 00:01:12.657 [Pipeline] timeout 00:01:12.657 Timeout set to expire in 50 min 00:01:12.658 [Pipeline] { 00:01:12.669 [Pipeline] stage 00:01:12.671 [Pipeline] { (Tests) 00:01:12.683 [Pipeline] sh 00:01:12.973 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.973 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.973 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.973 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:12.973 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.973 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.973 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:12.973 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.973 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:12.973 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:12.973 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:12.973 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:12.973 + source /etc/os-release 00:01:12.973 ++ NAME='Fedora Linux' 00:01:12.973 ++ VERSION='38 (Cloud Edition)' 00:01:12.973 ++ ID=fedora 00:01:12.973 ++ VERSION_ID=38 00:01:12.973 ++ VERSION_CODENAME= 00:01:12.973 ++ PLATFORM_ID=platform:f38 00:01:12.973 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:12.973 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:12.973 ++ LOGO=fedora-logo-icon 00:01:12.973 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:12.973 ++ HOME_URL=https://fedoraproject.org/ 00:01:12.973 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:12.973 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:12.973 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:12.973 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:12.973 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:12.973 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:12.973 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:12.973 ++ SUPPORT_END=2024-05-14 00:01:12.973 ++ VARIANT='Cloud Edition' 00:01:12.973 ++ VARIANT_ID=cloud 00:01:12.973 + uname -a 00:01:12.973 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:12.973 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:16.278 Hugepages 00:01:16.278 node hugesize free / total 00:01:16.278 node0 1048576kB 0 / 0 00:01:16.278 node0 2048kB 0 / 0 00:01:16.278 node1 1048576kB 0 / 0 00:01:16.278 node1 2048kB 0 / 0 00:01:16.278 00:01:16.278 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:16.278 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:16.278 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:16.278 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:16.278 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:16.278 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma 
- - 00:01:16.278 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:16.278 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:16.279 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:16.279 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:16.279 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:16.279 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:16.279 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:16.279 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:16.279 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:16.279 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:16.279 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:16.279 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:16.279 + rm -f /tmp/spdk-ld-path 00:01:16.279 + source autorun-spdk.conf 00:01:16.279 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.279 ++ SPDK_TEST_NVMF=1 00:01:16.279 ++ SPDK_TEST_NVME_CLI=1 00:01:16.279 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.279 ++ SPDK_TEST_NVMF_NICS=e810 00:01:16.279 ++ SPDK_RUN_UBSAN=1 00:01:16.279 ++ NET_TYPE=phy 00:01:16.279 ++ RUN_NIGHTLY=1 00:01:16.279 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:16.279 + [[ -n '' ]] 00:01:16.279 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.279 + for M in /var/spdk/build-*-manifest.txt 00:01:16.279 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:16.279 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.279 + for M in /var/spdk/build-*-manifest.txt 00:01:16.279 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:16.279 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.279 ++ uname 00:01:16.279 + [[ Linux == \L\i\n\u\x ]] 00:01:16.279 + sudo dmesg -T 00:01:16.279 + sudo dmesg --clear 00:01:16.279 + dmesg_pid=730867 00:01:16.279 + [[ Fedora Linux == FreeBSD ]] 00:01:16.279 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.279 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.279 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.279 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.279 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.279 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.279 + sudo dmesg -Tw 00:01:16.279 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.279 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:16.279 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.279 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.279 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.279 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.279 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.279 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.279 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:16.279 Test configuration: 00:01:16.279 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.279 SPDK_TEST_NVMF=1 00:01:16.279 SPDK_TEST_NVME_CLI=1 00:01:16.279 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.279 SPDK_TEST_NVMF_NICS=e810 00:01:16.279 SPDK_RUN_UBSAN=1 00:01:16.279 NET_TYPE=phy 00:01:16.279 RUN_NIGHTLY=1 07:53:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:16.279 07:53:46 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.279 07:53:46 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.279 07:53:46 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.279 07:53:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.279 07:53:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.279 07:53:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.279 07:53:46 -- paths/export.sh@5 -- $ export PATH 00:01:16.279 07:53:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.279 07:53:46 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:16.279 07:53:46 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:16.279 07:53:46 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718085226.XXXXXX 00:01:16.279 07:53:46 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718085226.ai2dqJ 00:01:16.279 07:53:46 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:16.279 07:53:46 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
00:01:16.279 07:53:46 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:16.279 07:53:46 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:16.279 07:53:46 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.279 07:53:46 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:16.279 07:53:46 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:16.279 07:53:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.279 07:53:46 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:16.279 07:53:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:16.279 07:53:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:16.279 07:53:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.279 07:53:46 -- spdk/autobuild.sh@16 -- $ date -u 00:01:16.279 Tue Jun 11 05:53:46 AM UTC 2024 00:01:16.279 07:53:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:16.279 LTS-43-g130b9406a 00:01:16.279 07:53:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:16.279 07:53:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:16.279 07:53:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:16.279 07:53:46 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:16.279 07:53:46 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:16.279 07:53:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.279 ************************************ 00:01:16.279 START TEST ubsan 00:01:16.279 ************************************ 00:01:16.279 07:53:46 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:16.279 using ubsan 00:01:16.279 00:01:16.279 real 0m0.000s 00:01:16.279 user 0m0.000s 00:01:16.279 sys 0m0.000s 00:01:16.279 07:53:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:16.279 07:53:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.279 ************************************ 00:01:16.279 END TEST ubsan 00:01:16.279 ************************************ 00:01:16.279 07:53:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:16.279 07:53:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:16.279 07:53:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:16.279 07:53:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:16.279 07:53:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:16.279 07:53:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:16.279 07:53:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:16.279 07:53:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:16.279 07:53:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:16.279 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:16.279 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:16.852 Using 'verbs' RDMA provider 00:01:29.665 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:44.582 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:44.582 Creating mk/config.mk...done. 00:01:44.582 Creating mk/cc.flags.mk...done. 00:01:44.582 Type 'make' to build. 00:01:44.582 07:54:13 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:44.582 07:54:13 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:44.582 07:54:13 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:44.582 07:54:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.582 ************************************ 00:01:44.582 START TEST make 00:01:44.582 ************************************ 00:01:44.582 07:54:13 -- common/autotest_common.sh@1104 -- $ make -j144 00:01:44.582 make[1]: Nothing to be done for 'all'. 00:01:52.724 The Meson build system 00:01:52.724 Version: 1.3.1 00:01:52.724 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:52.724 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:52.724 Build type: native build 00:01:52.724 Program cat found: YES (/usr/bin/cat) 00:01:52.724 Project name: DPDK 00:01:52.724 Project version: 23.11.0 00:01:52.725 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:52.725 C linker for the host machine: cc ld.bfd 2.39-16 00:01:52.725 Host machine cpu family: x86_64 00:01:52.725 Host machine cpu: x86_64 00:01:52.725 Message: ## Building in Developer Mode ## 00:01:52.725 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:52.725 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:52.725 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:52.725 Program python3 found: YES (/usr/bin/python3) 00:01:52.725 Program cat found: YES (/usr/bin/cat) 00:01:52.725 Compiler for C supports arguments -march=native: YES 00:01:52.725 Checking for size of "void *" : 8 00:01:52.725 Checking for size of "void *" : 8 (cached) 00:01:52.725 Library m found: YES 00:01:52.725 Library numa found: YES 00:01:52.725 Has header "numaif.h" : YES 00:01:52.725 Library fdt found: NO 00:01:52.725 Library execinfo found: NO 00:01:52.725 Has header "execinfo.h" : YES 00:01:52.725 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:52.725 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:52.725 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:52.725 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:52.725 Run-time dependency openssl found: YES 3.0.9 00:01:52.725 Run-time dependency libpcap found: YES 1.10.4 00:01:52.725 Has header "pcap.h" with dependency libpcap: YES 00:01:52.725 Compiler for C supports arguments -Wcast-qual: YES 00:01:52.725 Compiler for C supports arguments -Wdeprecated: YES 00:01:52.725 Compiler for C supports arguments -Wformat: YES 00:01:52.725 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:52.725 Compiler for C supports arguments -Wformat-security: NO 00:01:52.725 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:52.725 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:52.725 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:52.725 Compiler for C supports arguments -Wold-style-definition: YES 00:01:52.725 Compiler for C supports arguments -Wpointer-arith: YES 00:01:52.725 Compiler for C supports arguments -Wsign-compare: YES 00:01:52.725 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:52.725 Compiler for C supports arguments -Wundef: YES 00:01:52.725 Compiler for C supports arguments -Wwrite-strings: YES 00:01:52.725 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:52.725 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:52.725 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:52.725 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:52.725 Program objdump found: YES (/usr/bin/objdump) 00:01:52.725 Compiler for C supports arguments -mavx512f: YES 00:01:52.725 Checking if "AVX512 checking" compiles: YES 00:01:52.725 Fetching value of define "__SSE4_2__" : 1 00:01:52.725 Fetching value of define "__AES__" : 1 00:01:52.725 Fetching value of define "__AVX__" : 1 00:01:52.725 Fetching value of define "__AVX2__" : 1 00:01:52.725 Fetching value of define "__AVX512BW__" : 1 00:01:52.725 Fetching value of define "__AVX512CD__" : 1 00:01:52.725 Fetching value of define "__AVX512DQ__" : 1 00:01:52.725 Fetching value of define "__AVX512F__" : 1 00:01:52.725 Fetching value of define "__AVX512VL__" : 1 00:01:52.725 Fetching value of define "__PCLMUL__" : 1 00:01:52.725 Fetching value of define "__RDRND__" : 1 00:01:52.725 Fetching value of define "__RDSEED__" : 1 00:01:52.725 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:52.725 Fetching value of define "__znver1__" : (undefined) 00:01:52.725 Fetching value of define "__znver2__" : (undefined) 00:01:52.725 Fetching value of define "__znver3__" : (undefined) 00:01:52.725 Fetching value of define "__znver4__" : (undefined) 00:01:52.725 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:52.725 Message: lib/log: Defining dependency "log" 00:01:52.725 Message: lib/kvargs: Defining dependency "kvargs" 00:01:52.725 Message: lib/telemetry: Defining dependency "telemetry" 00:01:52.725 Checking for function "getentropy" : NO 00:01:52.725 Message: lib/eal: Defining dependency "eal" 00:01:52.725 Message: lib/ring: Defining dependency "ring" 00:01:52.725 Message: lib/rcu: Defining dependency "rcu" 00:01:52.725 Message: lib/mempool: Defining dependency "mempool" 00:01:52.725 Message: lib/mbuf: Defining dependency "mbuf" 00:01:52.725 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:52.725 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:52.725 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:52.725 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:52.725 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:52.725 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:52.725 Compiler for C supports arguments -mpclmul: YES 00:01:52.725 Compiler for C supports arguments -maes: YES 00:01:52.725 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:52.725 Compiler for C supports arguments -mavx512bw: YES 00:01:52.725 Compiler for C supports arguments -mavx512dq: YES 00:01:52.725 Compiler for C supports arguments -mavx512vl: YES 00:01:52.725 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:52.725 Compiler for C supports arguments -mavx2: YES 00:01:52.725 Compiler for C supports arguments -mavx: YES 00:01:52.725 Message: lib/net: Defining dependency "net" 
00:01:52.725 Message: lib/meter: Defining dependency "meter" 00:01:52.725 Message: lib/ethdev: Defining dependency "ethdev" 00:01:52.725 Message: lib/pci: Defining dependency "pci" 00:01:52.725 Message: lib/cmdline: Defining dependency "cmdline" 00:01:52.725 Message: lib/hash: Defining dependency "hash" 00:01:52.725 Message: lib/timer: Defining dependency "timer" 00:01:52.725 Message: lib/compressdev: Defining dependency "compressdev" 00:01:52.725 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:52.725 Message: lib/dmadev: Defining dependency "dmadev" 00:01:52.725 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:52.725 Message: lib/power: Defining dependency "power" 00:01:52.725 Message: lib/reorder: Defining dependency "reorder" 00:01:52.725 Message: lib/security: Defining dependency "security" 00:01:52.725 Has header "linux/userfaultfd.h" : YES 00:01:52.725 Has header "linux/vduse.h" : YES 00:01:52.725 Message: lib/vhost: Defining dependency "vhost" 00:01:52.725 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:52.725 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:52.725 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:52.725 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:52.725 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:52.725 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:52.725 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:52.725 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:52.725 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:52.725 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:52.725 Program doxygen found: YES (/usr/bin/doxygen) 00:01:52.725 Configuring doxy-api-html.conf using configuration 00:01:52.725 Configuring doxy-api-man.conf using configuration 00:01:52.725 Program mandb found: YES (/usr/bin/mandb) 00:01:52.725 Program sphinx-build found: NO 00:01:52.725 Configuring rte_build_config.h using configuration 00:01:52.725 Message: 00:01:52.725 ================= 00:01:52.725 Applications Enabled 00:01:52.725 ================= 00:01:52.725 00:01:52.725 apps: 00:01:52.725 00:01:52.725 00:01:52.725 Message: 00:01:52.725 ================= 00:01:52.725 Libraries Enabled 00:01:52.725 ================= 00:01:52.725 00:01:52.725 libs: 00:01:52.725 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:52.725 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:52.725 cryptodev, dmadev, power, reorder, security, vhost, 00:01:52.725 00:01:52.725 Message: 00:01:52.725 =============== 00:01:52.725 Drivers Enabled 00:01:52.725 =============== 00:01:52.725 00:01:52.725 common: 00:01:52.725 00:01:52.725 bus: 00:01:52.725 pci, vdev, 00:01:52.725 mempool: 00:01:52.725 ring, 00:01:52.725 dma: 00:01:52.725 00:01:52.725 net: 00:01:52.725 00:01:52.725 crypto: 00:01:52.725 00:01:52.725 compress: 00:01:52.725 00:01:52.725 vdpa: 00:01:52.725 00:01:52.725 00:01:52.725 Message: 00:01:52.725 ================= 00:01:52.725 Content Skipped 00:01:52.725 ================= 00:01:52.725 00:01:52.725 apps: 00:01:52.725 dumpcap: explicitly disabled via build config 00:01:52.725 graph: explicitly disabled via build config 00:01:52.725 pdump: explicitly disabled via build config 00:01:52.725 proc-info: explicitly disabled via build config 00:01:52.725 test-acl: explicitly disabled via build config 
00:01:52.725 test-bbdev: explicitly disabled via build config 00:01:52.725 test-cmdline: explicitly disabled via build config 00:01:52.725 test-compress-perf: explicitly disabled via build config 00:01:52.725 test-crypto-perf: explicitly disabled via build config 00:01:52.725 test-dma-perf: explicitly disabled via build config 00:01:52.725 test-eventdev: explicitly disabled via build config 00:01:52.725 test-fib: explicitly disabled via build config 00:01:52.725 test-flow-perf: explicitly disabled via build config 00:01:52.725 test-gpudev: explicitly disabled via build config 00:01:52.725 test-mldev: explicitly disabled via build config 00:01:52.725 test-pipeline: explicitly disabled via build config 00:01:52.725 test-pmd: explicitly disabled via build config 00:01:52.725 test-regex: explicitly disabled via build config 00:01:52.725 test-sad: explicitly disabled via build config 00:01:52.725 test-security-perf: explicitly disabled via build config 00:01:52.725 00:01:52.725 libs: 00:01:52.725 metrics: explicitly disabled via build config 00:01:52.725 acl: explicitly disabled via build config 00:01:52.725 bbdev: explicitly disabled via build config 00:01:52.725 bitratestats: explicitly disabled via build config 00:01:52.725 bpf: explicitly disabled via build config 00:01:52.725 cfgfile: explicitly disabled via build config 00:01:52.726 distributor: explicitly disabled via build config 00:01:52.726 efd: explicitly disabled via build config 00:01:52.726 eventdev: explicitly disabled via build config 00:01:52.726 dispatcher: explicitly disabled via build config 00:01:52.726 gpudev: explicitly disabled via build config 00:01:52.726 gro: explicitly disabled via build config 00:01:52.726 gso: explicitly disabled via build config 00:01:52.726 ip_frag: explicitly disabled via build config 00:01:52.726 jobstats: explicitly disabled via build config 00:01:52.726 latencystats: explicitly disabled via build config 00:01:52.726 lpm: explicitly disabled via build config 00:01:52.726 member: explicitly disabled via build config 00:01:52.726 pcapng: explicitly disabled via build config 00:01:52.726 rawdev: explicitly disabled via build config 00:01:52.726 regexdev: explicitly disabled via build config 00:01:52.726 mldev: explicitly disabled via build config 00:01:52.726 rib: explicitly disabled via build config 00:01:52.726 sched: explicitly disabled via build config 00:01:52.726 stack: explicitly disabled via build config 00:01:52.726 ipsec: explicitly disabled via build config 00:01:52.726 pdcp: explicitly disabled via build config 00:01:52.726 fib: explicitly disabled via build config 00:01:52.726 port: explicitly disabled via build config 00:01:52.726 pdump: explicitly disabled via build config 00:01:52.726 table: explicitly disabled via build config 00:01:52.726 pipeline: explicitly disabled via build config 00:01:52.726 graph: explicitly disabled via build config 00:01:52.726 node: explicitly disabled via build config 00:01:52.726 00:01:52.726 drivers: 00:01:52.726 common/cpt: not in enabled drivers build config 00:01:52.726 common/dpaax: not in enabled drivers build config 00:01:52.726 common/iavf: not in enabled drivers build config 00:01:52.726 common/idpf: not in enabled drivers build config 00:01:52.726 common/mvep: not in enabled drivers build config 00:01:52.726 common/octeontx: not in enabled drivers build config 00:01:52.726 bus/auxiliary: not in enabled drivers build config 00:01:52.726 bus/cdx: not in enabled drivers build config 00:01:52.726 bus/dpaa: not in enabled drivers build config 
00:01:52.726 bus/fslmc: not in enabled drivers build config 00:01:52.726 bus/ifpga: not in enabled drivers build config 00:01:52.726 bus/platform: not in enabled drivers build config 00:01:52.726 bus/vmbus: not in enabled drivers build config 00:01:52.726 common/cnxk: not in enabled drivers build config 00:01:52.726 common/mlx5: not in enabled drivers build config 00:01:52.726 common/nfp: not in enabled drivers build config 00:01:52.726 common/qat: not in enabled drivers build config 00:01:52.726 common/sfc_efx: not in enabled drivers build config 00:01:52.726 mempool/bucket: not in enabled drivers build config 00:01:52.726 mempool/cnxk: not in enabled drivers build config 00:01:52.726 mempool/dpaa: not in enabled drivers build config 00:01:52.726 mempool/dpaa2: not in enabled drivers build config 00:01:52.726 mempool/octeontx: not in enabled drivers build config 00:01:52.726 mempool/stack: not in enabled drivers build config 00:01:52.726 dma/cnxk: not in enabled drivers build config 00:01:52.726 dma/dpaa: not in enabled drivers build config 00:01:52.726 dma/dpaa2: not in enabled drivers build config 00:01:52.726 dma/hisilicon: not in enabled drivers build config 00:01:52.726 dma/idxd: not in enabled drivers build config 00:01:52.726 dma/ioat: not in enabled drivers build config 00:01:52.726 dma/skeleton: not in enabled drivers build config 00:01:52.726 net/af_packet: not in enabled drivers build config 00:01:52.726 net/af_xdp: not in enabled drivers build config 00:01:52.726 net/ark: not in enabled drivers build config 00:01:52.726 net/atlantic: not in enabled drivers build config 00:01:52.726 net/avp: not in enabled drivers build config 00:01:52.726 net/axgbe: not in enabled drivers build config 00:01:52.726 net/bnx2x: not in enabled drivers build config 00:01:52.726 net/bnxt: not in enabled drivers build config 00:01:52.726 net/bonding: not in enabled drivers build config 00:01:52.726 net/cnxk: not in enabled drivers build config 00:01:52.726 net/cpfl: not in enabled drivers build config 00:01:52.726 net/cxgbe: not in enabled drivers build config 00:01:52.726 net/dpaa: not in enabled drivers build config 00:01:52.726 net/dpaa2: not in enabled drivers build config 00:01:52.726 net/e1000: not in enabled drivers build config 00:01:52.726 net/ena: not in enabled drivers build config 00:01:52.726 net/enetc: not in enabled drivers build config 00:01:52.726 net/enetfec: not in enabled drivers build config 00:01:52.726 net/enic: not in enabled drivers build config 00:01:52.726 net/failsafe: not in enabled drivers build config 00:01:52.726 net/fm10k: not in enabled drivers build config 00:01:52.726 net/gve: not in enabled drivers build config 00:01:52.726 net/hinic: not in enabled drivers build config 00:01:52.726 net/hns3: not in enabled drivers build config 00:01:52.726 net/i40e: not in enabled drivers build config 00:01:52.726 net/iavf: not in enabled drivers build config 00:01:52.726 net/ice: not in enabled drivers build config 00:01:52.726 net/idpf: not in enabled drivers build config 00:01:52.726 net/igc: not in enabled drivers build config 00:01:52.726 net/ionic: not in enabled drivers build config 00:01:52.726 net/ipn3ke: not in enabled drivers build config 00:01:52.726 net/ixgbe: not in enabled drivers build config 00:01:52.726 net/mana: not in enabled drivers build config 00:01:52.726 net/memif: not in enabled drivers build config 00:01:52.726 net/mlx4: not in enabled drivers build config 00:01:52.726 net/mlx5: not in enabled drivers build config 00:01:52.726 net/mvneta: not in enabled 
drivers build config 00:01:52.726 net/mvpp2: not in enabled drivers build config 00:01:52.726 net/netvsc: not in enabled drivers build config 00:01:52.726 net/nfb: not in enabled drivers build config 00:01:52.726 net/nfp: not in enabled drivers build config 00:01:52.726 net/ngbe: not in enabled drivers build config 00:01:52.726 net/null: not in enabled drivers build config 00:01:52.726 net/octeontx: not in enabled drivers build config 00:01:52.726 net/octeon_ep: not in enabled drivers build config 00:01:52.726 net/pcap: not in enabled drivers build config 00:01:52.726 net/pfe: not in enabled drivers build config 00:01:52.726 net/qede: not in enabled drivers build config 00:01:52.726 net/ring: not in enabled drivers build config 00:01:52.726 net/sfc: not in enabled drivers build config 00:01:52.726 net/softnic: not in enabled drivers build config 00:01:52.726 net/tap: not in enabled drivers build config 00:01:52.726 net/thunderx: not in enabled drivers build config 00:01:52.726 net/txgbe: not in enabled drivers build config 00:01:52.726 net/vdev_netvsc: not in enabled drivers build config 00:01:52.726 net/vhost: not in enabled drivers build config 00:01:52.726 net/virtio: not in enabled drivers build config 00:01:52.726 net/vmxnet3: not in enabled drivers build config 00:01:52.726 raw/*: missing internal dependency, "rawdev" 00:01:52.726 crypto/armv8: not in enabled drivers build config 00:01:52.726 crypto/bcmfs: not in enabled drivers build config 00:01:52.726 crypto/caam_jr: not in enabled drivers build config 00:01:52.726 crypto/ccp: not in enabled drivers build config 00:01:52.726 crypto/cnxk: not in enabled drivers build config 00:01:52.726 crypto/dpaa_sec: not in enabled drivers build config 00:01:52.726 crypto/dpaa2_sec: not in enabled drivers build config 00:01:52.726 crypto/ipsec_mb: not in enabled drivers build config 00:01:52.726 crypto/mlx5: not in enabled drivers build config 00:01:52.726 crypto/mvsam: not in enabled drivers build config 00:01:52.726 crypto/nitrox: not in enabled drivers build config 00:01:52.726 crypto/null: not in enabled drivers build config 00:01:52.726 crypto/octeontx: not in enabled drivers build config 00:01:52.726 crypto/openssl: not in enabled drivers build config 00:01:52.726 crypto/scheduler: not in enabled drivers build config 00:01:52.726 crypto/uadk: not in enabled drivers build config 00:01:52.726 crypto/virtio: not in enabled drivers build config 00:01:52.726 compress/isal: not in enabled drivers build config 00:01:52.726 compress/mlx5: not in enabled drivers build config 00:01:52.726 compress/octeontx: not in enabled drivers build config 00:01:52.726 compress/zlib: not in enabled drivers build config 00:01:52.726 regex/*: missing internal dependency, "regexdev" 00:01:52.726 ml/*: missing internal dependency, "mldev" 00:01:52.726 vdpa/ifc: not in enabled drivers build config 00:01:52.726 vdpa/mlx5: not in enabled drivers build config 00:01:52.726 vdpa/nfp: not in enabled drivers build config 00:01:52.726 vdpa/sfc: not in enabled drivers build config 00:01:52.726 event/*: missing internal dependency, "eventdev" 00:01:52.726 baseband/*: missing internal dependency, "bbdev" 00:01:52.726 gpu/*: missing internal dependency, "gpudev" 00:01:52.726 00:01:52.726 00:01:52.726 Build targets in project: 84 00:01:52.726 00:01:52.726 DPDK 23.11.0 00:01:52.726 00:01:52.726 User defined options 00:01:52.726 buildtype : debug 00:01:52.726 default_library : shared 00:01:52.726 libdir : lib 00:01:52.726 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:52.726 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:52.726 c_link_args : 00:01:52.726 cpu_instruction_set: native 00:01:52.726 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:52.726 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:52.726 enable_docs : false 00:01:52.726 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:52.726 enable_kmods : false 00:01:52.726 tests : false 00:01:52.726 00:01:52.726 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.726 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:52.726 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:52.726 [2/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:52.726 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:52.726 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:52.726 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:52.726 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:52.726 [7/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:52.726 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:52.726 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:52.727 [10/264] Linking static target lib/librte_kvargs.a 00:01:52.727 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:52.727 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:52.727 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:52.727 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:52.727 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:52.727 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:52.727 [17/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:52.727 [18/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:52.727 [19/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:52.727 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.727 [21/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:52.727 [22/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:52.727 [23/264] Linking static target lib/librte_log.a 00:01:52.727 [24/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:52.727 [25/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:52.727 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.727 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.727 [28/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.727 [29/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:52.727 [30/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.727 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.727 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.727 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.727 [34/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.727 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.727 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.727 [37/264] Linking static target lib/librte_pci.a 00:01:52.727 [38/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.727 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.727 [40/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.727 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:52.727 [42/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:52.727 [43/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:52.727 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:52.727 [45/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:52.727 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:52.727 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:52.727 [48/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.727 [49/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:52.727 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:52.727 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:52.727 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:52.727 [53/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.727 [54/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:52.727 [55/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.727 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:52.727 [57/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:52.727 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:52.727 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:52.727 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:52.727 [61/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.727 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:52.727 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:52.727 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:52.727 [65/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.988 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:52.988 [67/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:52.988 [68/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:52.988 [69/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:52.988 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:52.988 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:52.988 [72/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.988 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:52.988 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:52.988 [75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:52.988 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:52.988 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:52.988 [78/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:52.988 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:52.988 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:52.988 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:52.988 [82/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:52.988 [83/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:52.988 [84/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.988 [85/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.988 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:52.988 [87/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:52.988 [88/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.988 [89/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:52.988 [90/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:52.988 [91/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:52.988 [92/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.988 [93/264] Linking static target lib/librte_rcu.a 00:01:52.988 [94/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:52.988 [95/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:52.988 [96/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:52.988 [97/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:52.988 [98/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.988 [99/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:52.988 [100/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.988 [101/264] Linking static target lib/librte_telemetry.a 00:01:52.988 [102/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:52.988 [103/264] Linking static target lib/librte_ring.a 00:01:52.988 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:52.988 [105/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:52.988 [106/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:52.988 [107/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:52.989 
[108/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.989 [109/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:52.989 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.989 [111/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:52.989 [112/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:52.989 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.989 [114/264] Linking static target lib/librte_meter.a 00:01:52.989 [115/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.989 [116/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:52.989 [117/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.989 [118/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.989 [119/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:52.989 [120/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:52.989 [121/264] Linking static target lib/librte_mempool.a 00:01:52.989 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:52.989 [123/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.989 [124/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.989 [125/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.989 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.989 [127/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.989 [128/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.989 [129/264] Linking static target lib/librte_cmdline.a 00:01:52.989 [130/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.989 [131/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.989 [132/264] Linking static target lib/librte_compressdev.a 00:01:52.989 [133/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.989 [134/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.989 [135/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:52.989 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:52.989 [137/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.989 [138/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:52.989 [139/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.989 [140/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.989 [141/264] Linking static target lib/librte_timer.a 00:01:52.989 [142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.989 [143/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.989 [144/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.989 [145/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:52.989 [146/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:52.989 [147/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.989 [148/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:52.989 [149/264] Linking 
static target drivers/libtmp_rte_bus_vdev.a 00:01:52.989 [150/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.989 [151/264] Linking target lib/librte_log.so.24.0 00:01:52.989 [152/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.989 [153/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.989 [154/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.989 [155/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.989 [156/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:52.989 [157/264] Linking static target lib/librte_reorder.a 00:01:52.989 [158/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.989 [159/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.989 [160/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.989 [161/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.989 [162/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.989 [163/264] Linking static target lib/librte_security.a 00:01:52.989 [164/264] Linking static target lib/librte_power.a 00:01:52.989 [165/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.989 [166/264] Linking static target lib/librte_dmadev.a 00:01:53.251 [167/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.251 [168/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:53.251 [169/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:53.251 [170/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:53.251 [171/264] Linking static target lib/librte_net.a 00:01:53.251 [172/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:53.251 [173/264] Linking static target lib/librte_eal.a 00:01:53.251 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.251 [175/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.251 [176/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:53.251 [177/264] Linking static target lib/librte_mbuf.a 00:01:53.251 [178/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.251 [179/264] Linking target lib/librte_kvargs.so.24.0 00:01:53.251 [180/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:53.251 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.251 [182/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:53.251 [183/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.251 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.251 [185/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.251 [186/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.251 [187/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.251 [188/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.251 [189/264] Linking static target drivers/librte_bus_vdev.a 00:01:53.251 [190/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:53.251 
[191/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.251 [192/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.251 [193/264] Linking static target drivers/librte_bus_pci.a 00:01:53.251 [194/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.251 [195/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:53.251 [196/264] Linking static target lib/librte_hash.a 00:01:53.513 [197/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.513 [198/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:53.513 [199/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.513 [200/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.513 [201/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.513 [202/264] Linking static target drivers/librte_mempool_ring.a 00:01:53.513 [203/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.513 [204/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.513 [205/264] Linking static target lib/librte_cryptodev.a 00:01:53.513 [206/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.513 [207/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.513 [208/264] Linking target lib/librte_telemetry.so.24.0 00:01:53.513 [209/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.513 [210/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.775 [211/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.775 [212/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.775 [213/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:53.775 [214/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.775 [215/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.775 [216/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.037 [217/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.037 [218/264] Linking static target lib/librte_ethdev.a 00:01:54.037 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.037 [220/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.037 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.299 [222/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.299 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.246 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:55.246 [225/264] Linking static target lib/librte_vhost.a 00:01:55.508 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:57.426 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.017 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.589 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.589 [230/264] Linking target lib/librte_eal.so.24.0 00:02:04.850 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:04.850 [232/264] Linking target lib/librte_ring.so.24.0 00:02:04.850 [233/264] Linking target lib/librte_timer.so.24.0 00:02:04.850 [234/264] Linking target lib/librte_meter.so.24.0 00:02:04.850 [235/264] Linking target lib/librte_pci.so.24.0 00:02:04.850 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:04.850 [237/264] Linking target lib/librte_dmadev.so.24.0 00:02:04.850 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:04.850 [239/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:04.850 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:04.850 [241/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:04.850 [242/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:05.111 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:05.111 [244/264] Linking target lib/librte_rcu.so.24.0 00:02:05.111 [245/264] Linking target lib/librte_mempool.so.24.0 00:02:05.111 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:05.111 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:05.372 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:05.372 [249/264] Linking target lib/librte_mbuf.so.24.0 00:02:05.372 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:05.372 [251/264] Linking target lib/librte_compressdev.so.24.0 00:02:05.372 [252/264] Linking target lib/librte_reorder.so.24.0 00:02:05.372 [253/264] Linking target lib/librte_net.so.24.0 00:02:05.372 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:05.634 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:05.634 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:05.634 [257/264] Linking target lib/librte_hash.so.24.0 00:02:05.634 [258/264] Linking target lib/librte_cmdline.so.24.0 00:02:05.634 [259/264] Linking target lib/librte_ethdev.so.24.0 00:02:05.634 [260/264] Linking target lib/librte_security.so.24.0 00:02:05.895 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:05.895 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:05.895 [263/264] Linking target lib/librte_power.so.24.0 00:02:05.895 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:05.895 INFO: autodetecting backend as ninja 00:02:05.895 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:06.838 CC lib/ut_mock/mock.o 00:02:06.839 CC lib/log/log.o 00:02:06.839 CC lib/log/log_flags.o 00:02:06.839 CC lib/log/log_deprecated.o 00:02:06.839 CC lib/ut/ut.o 00:02:07.101 LIB libspdk_ut_mock.a 00:02:07.101 LIB libspdk_log.a 00:02:07.101 SO libspdk_ut_mock.so.5.0 
00:02:07.101 LIB libspdk_ut.a 00:02:07.101 SO libspdk_log.so.6.1 00:02:07.101 SO libspdk_ut.so.1.0 00:02:07.101 SYMLINK libspdk_ut_mock.so 00:02:07.101 SYMLINK libspdk_log.so 00:02:07.101 SYMLINK libspdk_ut.so 00:02:07.363 CXX lib/trace_parser/trace.o 00:02:07.363 CC lib/dma/dma.o 00:02:07.363 CC lib/util/base64.o 00:02:07.363 CC lib/ioat/ioat.o 00:02:07.363 CC lib/util/bit_array.o 00:02:07.363 CC lib/util/cpuset.o 00:02:07.363 CC lib/util/crc16.o 00:02:07.363 CC lib/util/crc32.o 00:02:07.363 CC lib/util/crc32c.o 00:02:07.363 CC lib/util/crc32_ieee.o 00:02:07.363 CC lib/util/crc64.o 00:02:07.363 CC lib/util/dif.o 00:02:07.363 CC lib/util/fd.o 00:02:07.363 CC lib/util/file.o 00:02:07.363 CC lib/util/hexlify.o 00:02:07.363 CC lib/util/iov.o 00:02:07.363 CC lib/util/math.o 00:02:07.363 CC lib/util/pipe.o 00:02:07.363 CC lib/util/strerror_tls.o 00:02:07.363 CC lib/util/string.o 00:02:07.363 CC lib/util/uuid.o 00:02:07.363 CC lib/util/fd_group.o 00:02:07.363 CC lib/util/xor.o 00:02:07.363 CC lib/util/zipf.o 00:02:07.363 CC lib/vfio_user/host/vfio_user_pci.o 00:02:07.363 CC lib/vfio_user/host/vfio_user.o 00:02:07.625 LIB libspdk_dma.a 00:02:07.625 SO libspdk_dma.so.3.0 00:02:07.625 LIB libspdk_ioat.a 00:02:07.625 SYMLINK libspdk_dma.so 00:02:07.625 SO libspdk_ioat.so.6.0 00:02:07.625 LIB libspdk_vfio_user.a 00:02:07.625 SYMLINK libspdk_ioat.so 00:02:07.886 SO libspdk_vfio_user.so.4.0 00:02:07.886 SYMLINK libspdk_vfio_user.so 00:02:07.886 LIB libspdk_util.a 00:02:07.886 SO libspdk_util.so.8.0 00:02:08.155 SYMLINK libspdk_util.so 00:02:08.155 LIB libspdk_trace_parser.a 00:02:08.155 SO libspdk_trace_parser.so.4.0 00:02:08.417 SYMLINK libspdk_trace_parser.so 00:02:08.417 CC lib/conf/conf.o 00:02:08.417 CC lib/json/json_parse.o 00:02:08.417 CC lib/json/json_util.o 00:02:08.417 CC lib/json/json_write.o 00:02:08.417 CC lib/rdma/common.o 00:02:08.417 CC lib/rdma/rdma_verbs.o 00:02:08.417 CC lib/vmd/vmd.o 00:02:08.417 CC lib/vmd/led.o 00:02:08.417 CC lib/env_dpdk/env.o 00:02:08.417 CC lib/env_dpdk/memory.o 00:02:08.417 CC lib/env_dpdk/pci.o 00:02:08.417 CC lib/env_dpdk/init.o 00:02:08.417 CC lib/idxd/idxd.o 00:02:08.417 CC lib/env_dpdk/threads.o 00:02:08.417 CC lib/idxd/idxd_user.o 00:02:08.417 CC lib/env_dpdk/pci_ioat.o 00:02:08.417 CC lib/env_dpdk/pci_virtio.o 00:02:08.417 CC lib/idxd/idxd_kernel.o 00:02:08.417 CC lib/env_dpdk/pci_vmd.o 00:02:08.417 CC lib/env_dpdk/pci_idxd.o 00:02:08.417 CC lib/env_dpdk/sigbus_handler.o 00:02:08.417 CC lib/env_dpdk/pci_event.o 00:02:08.417 CC lib/env_dpdk/pci_dpdk.o 00:02:08.417 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:08.417 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:08.417 LIB libspdk_conf.a 00:02:08.679 SO libspdk_conf.so.5.0 00:02:08.679 LIB libspdk_json.a 00:02:08.679 LIB libspdk_rdma.a 00:02:08.679 SO libspdk_json.so.5.1 00:02:08.679 SYMLINK libspdk_conf.so 00:02:08.679 SO libspdk_rdma.so.5.0 00:02:08.679 SYMLINK libspdk_json.so 00:02:08.679 SYMLINK libspdk_rdma.so 00:02:08.679 LIB libspdk_idxd.a 00:02:08.940 SO libspdk_idxd.so.11.0 00:02:08.940 LIB libspdk_vmd.a 00:02:08.940 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.940 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.940 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.940 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.940 SYMLINK libspdk_idxd.so 00:02:08.940 SO libspdk_vmd.so.5.0 00:02:08.940 SYMLINK libspdk_vmd.so 00:02:09.202 LIB libspdk_jsonrpc.a 00:02:09.202 SO libspdk_jsonrpc.so.5.1 00:02:09.202 SYMLINK libspdk_jsonrpc.so 00:02:09.462 CC lib/rpc/rpc.o 00:02:09.462 LIB libspdk_env_dpdk.a 00:02:09.462 SO libspdk_env_dpdk.so.13.0 
00:02:09.724 LIB libspdk_rpc.a 00:02:09.724 SYMLINK libspdk_env_dpdk.so 00:02:09.724 SO libspdk_rpc.so.5.0 00:02:09.724 SYMLINK libspdk_rpc.so 00:02:09.986 CC lib/notify/notify.o 00:02:09.986 CC lib/notify/notify_rpc.o 00:02:09.986 CC lib/trace/trace.o 00:02:09.986 CC lib/sock/sock.o 00:02:09.986 CC lib/trace/trace_flags.o 00:02:09.986 CC lib/trace/trace_rpc.o 00:02:09.986 CC lib/sock/sock_rpc.o 00:02:10.248 LIB libspdk_notify.a 00:02:10.248 SO libspdk_notify.so.5.0 00:02:10.248 LIB libspdk_trace.a 00:02:10.248 SO libspdk_trace.so.9.0 00:02:10.248 SYMLINK libspdk_notify.so 00:02:10.248 SYMLINK libspdk_trace.so 00:02:10.248 LIB libspdk_sock.a 00:02:10.510 SO libspdk_sock.so.8.0 00:02:10.510 SYMLINK libspdk_sock.so 00:02:10.510 CC lib/thread/thread.o 00:02:10.510 CC lib/thread/iobuf.o 00:02:10.772 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.772 CC lib/nvme/nvme_ctrlr.o 00:02:10.772 CC lib/nvme/nvme_fabric.o 00:02:10.772 CC lib/nvme/nvme_ns_cmd.o 00:02:10.772 CC lib/nvme/nvme_ns.o 00:02:10.772 CC lib/nvme/nvme_pcie_common.o 00:02:10.772 CC lib/nvme/nvme_pcie.o 00:02:10.772 CC lib/nvme/nvme_quirks.o 00:02:10.772 CC lib/nvme/nvme_qpair.o 00:02:10.772 CC lib/nvme/nvme.o 00:02:10.772 CC lib/nvme/nvme_transport.o 00:02:10.772 CC lib/nvme/nvme_discovery.o 00:02:10.772 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.772 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.772 CC lib/nvme/nvme_tcp.o 00:02:10.772 CC lib/nvme/nvme_opal.o 00:02:10.772 CC lib/nvme/nvme_io_msg.o 00:02:10.772 CC lib/nvme/nvme_poll_group.o 00:02:10.772 CC lib/nvme/nvme_zns.o 00:02:10.772 CC lib/nvme/nvme_cuse.o 00:02:10.772 CC lib/nvme/nvme_vfio_user.o 00:02:10.772 CC lib/nvme/nvme_rdma.o 00:02:12.161 LIB libspdk_thread.a 00:02:12.161 SO libspdk_thread.so.9.0 00:02:12.161 SYMLINK libspdk_thread.so 00:02:12.161 CC lib/blob/blobstore.o 00:02:12.161 CC lib/virtio/virtio.o 00:02:12.161 CC lib/blob/request.o 00:02:12.161 CC lib/blob/zeroes.o 00:02:12.161 CC lib/virtio/virtio_vhost_user.o 00:02:12.161 CC lib/blob/blob_bs_dev.o 00:02:12.161 CC lib/virtio/virtio_vfio_user.o 00:02:12.161 CC lib/virtio/virtio_pci.o 00:02:12.161 CC lib/accel/accel.o 00:02:12.161 CC lib/init/json_config.o 00:02:12.161 CC lib/accel/accel_rpc.o 00:02:12.161 CC lib/init/subsystem.o 00:02:12.161 CC lib/accel/accel_sw.o 00:02:12.161 CC lib/init/subsystem_rpc.o 00:02:12.161 CC lib/init/rpc.o 00:02:12.423 LIB libspdk_init.a 00:02:12.423 SO libspdk_init.so.4.0 00:02:12.685 LIB libspdk_virtio.a 00:02:12.685 LIB libspdk_nvme.a 00:02:12.685 SO libspdk_virtio.so.6.0 00:02:12.685 SYMLINK libspdk_init.so 00:02:12.685 SO libspdk_nvme.so.12.0 00:02:12.685 SYMLINK libspdk_virtio.so 00:02:12.947 CC lib/event/app.o 00:02:12.947 CC lib/event/reactor.o 00:02:12.947 CC lib/event/log_rpc.o 00:02:12.947 CC lib/event/app_rpc.o 00:02:12.947 CC lib/event/scheduler_static.o 00:02:12.947 SYMLINK libspdk_nvme.so 00:02:13.209 LIB libspdk_accel.a 00:02:13.209 SO libspdk_accel.so.14.0 00:02:13.209 LIB libspdk_event.a 00:02:13.209 SYMLINK libspdk_accel.so 00:02:13.209 SO libspdk_event.so.12.0 00:02:13.471 SYMLINK libspdk_event.so 00:02:13.471 CC lib/bdev/bdev.o 00:02:13.471 CC lib/bdev/bdev_rpc.o 00:02:13.471 CC lib/bdev/bdev_zone.o 00:02:13.471 CC lib/bdev/part.o 00:02:13.471 CC lib/bdev/scsi_nvme.o 00:02:14.859 LIB libspdk_blob.a 00:02:14.859 SO libspdk_blob.so.10.1 00:02:14.859 SYMLINK libspdk_blob.so 00:02:14.859 CC lib/blobfs/blobfs.o 00:02:14.859 CC lib/blobfs/tree.o 00:02:14.859 CC lib/lvol/lvol.o 00:02:15.805 LIB libspdk_bdev.a 00:02:15.805 LIB libspdk_blobfs.a 00:02:15.805 SO libspdk_blobfs.so.9.0 
00:02:15.805 SO libspdk_bdev.so.14.0 00:02:15.805 LIB libspdk_lvol.a 00:02:15.805 SO libspdk_lvol.so.9.1 00:02:15.805 SYMLINK libspdk_blobfs.so 00:02:15.805 SYMLINK libspdk_bdev.so 00:02:15.805 SYMLINK libspdk_lvol.so 00:02:16.066 CC lib/nbd/nbd.o 00:02:16.066 CC lib/ublk/ublk.o 00:02:16.066 CC lib/nbd/nbd_rpc.o 00:02:16.066 CC lib/ublk/ublk_rpc.o 00:02:16.066 CC lib/scsi/dev.o 00:02:16.066 CC lib/nvmf/ctrlr.o 00:02:16.066 CC lib/scsi/lun.o 00:02:16.066 CC lib/nvmf/ctrlr_discovery.o 00:02:16.066 CC lib/ftl/ftl_core.o 00:02:16.066 CC lib/scsi/port.o 00:02:16.066 CC lib/nvmf/ctrlr_bdev.o 00:02:16.066 CC lib/ftl/ftl_init.o 00:02:16.066 CC lib/scsi/scsi.o 00:02:16.066 CC lib/scsi/scsi_bdev.o 00:02:16.066 CC lib/nvmf/subsystem.o 00:02:16.066 CC lib/ftl/ftl_layout.o 00:02:16.066 CC lib/nvmf/nvmf.o 00:02:16.066 CC lib/scsi/scsi_pr.o 00:02:16.066 CC lib/ftl/ftl_debug.o 00:02:16.066 CC lib/nvmf/nvmf_rpc.o 00:02:16.066 CC lib/scsi/scsi_rpc.o 00:02:16.066 CC lib/ftl/ftl_io.o 00:02:16.066 CC lib/nvmf/transport.o 00:02:16.066 CC lib/ftl/ftl_sb.o 00:02:16.066 CC lib/scsi/task.o 00:02:16.066 CC lib/nvmf/tcp.o 00:02:16.066 CC lib/ftl/ftl_l2p.o 00:02:16.066 CC lib/ftl/ftl_l2p_flat.o 00:02:16.066 CC lib/nvmf/rdma.o 00:02:16.066 CC lib/ftl/ftl_band.o 00:02:16.066 CC lib/ftl/ftl_nv_cache.o 00:02:16.066 CC lib/ftl/ftl_band_ops.o 00:02:16.066 CC lib/ftl/ftl_writer.o 00:02:16.066 CC lib/ftl/ftl_rq.o 00:02:16.066 CC lib/ftl/ftl_reloc.o 00:02:16.066 CC lib/ftl/ftl_l2p_cache.o 00:02:16.066 CC lib/ftl/ftl_p2l.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:16.066 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:16.066 CC lib/ftl/utils/ftl_conf.o 00:02:16.066 CC lib/ftl/utils/ftl_mempool.o 00:02:16.066 CC lib/ftl/utils/ftl_md.o 00:02:16.066 CC lib/ftl/utils/ftl_property.o 00:02:16.066 CC lib/ftl/utils/ftl_bitmap.o 00:02:16.066 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:16.066 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:16.066 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:16.066 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:16.066 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:16.066 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:16.066 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:16.066 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:16.066 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:16.066 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:16.066 CC lib/ftl/base/ftl_base_dev.o 00:02:16.066 CC lib/ftl/base/ftl_base_bdev.o 00:02:16.066 CC lib/ftl/ftl_trace.o 00:02:16.326 LIB libspdk_nbd.a 00:02:16.587 SO libspdk_nbd.so.6.0 00:02:16.587 SYMLINK libspdk_nbd.so 00:02:16.587 LIB libspdk_scsi.a 00:02:16.587 SO libspdk_scsi.so.8.0 00:02:16.587 LIB libspdk_ublk.a 00:02:16.587 SO libspdk_ublk.so.2.0 00:02:16.587 SYMLINK libspdk_scsi.so 00:02:16.849 SYMLINK libspdk_ublk.so 00:02:16.849 LIB libspdk_ftl.a 00:02:16.849 CC lib/iscsi/conn.o 00:02:16.849 CC lib/vhost/vhost.o 00:02:16.849 CC lib/iscsi/init_grp.o 00:02:16.849 CC lib/vhost/vhost_rpc.o 00:02:16.849 CC lib/iscsi/iscsi.o 00:02:16.849 CC lib/vhost/vhost_scsi.o 00:02:16.849 CC lib/iscsi/md5.o 00:02:16.849 CC 
lib/vhost/vhost_blk.o 00:02:16.849 CC lib/iscsi/param.o 00:02:16.849 CC lib/vhost/rte_vhost_user.o 00:02:16.849 CC lib/iscsi/portal_grp.o 00:02:16.849 CC lib/iscsi/tgt_node.o 00:02:16.849 CC lib/iscsi/iscsi_subsystem.o 00:02:16.849 CC lib/iscsi/iscsi_rpc.o 00:02:16.849 CC lib/iscsi/task.o 00:02:17.110 SO libspdk_ftl.so.8.0 00:02:17.371 SYMLINK libspdk_ftl.so 00:02:17.944 LIB libspdk_nvmf.a 00:02:17.944 SO libspdk_nvmf.so.17.0 00:02:17.944 LIB libspdk_vhost.a 00:02:17.944 SO libspdk_vhost.so.7.1 00:02:17.944 SYMLINK libspdk_nvmf.so 00:02:17.944 SYMLINK libspdk_vhost.so 00:02:17.944 LIB libspdk_iscsi.a 00:02:18.204 SO libspdk_iscsi.so.7.0 00:02:18.204 SYMLINK libspdk_iscsi.so 00:02:18.777 CC module/env_dpdk/env_dpdk_rpc.o 00:02:18.777 CC module/blob/bdev/blob_bdev.o 00:02:18.777 CC module/sock/posix/posix.o 00:02:18.777 CC module/accel/error/accel_error.o 00:02:18.777 CC module/accel/error/accel_error_rpc.o 00:02:18.777 CC module/accel/ioat/accel_ioat.o 00:02:18.777 CC module/accel/dsa/accel_dsa.o 00:02:18.777 CC module/scheduler/gscheduler/gscheduler.o 00:02:18.777 CC module/accel/ioat/accel_ioat_rpc.o 00:02:18.777 CC module/accel/dsa/accel_dsa_rpc.o 00:02:18.777 CC module/accel/iaa/accel_iaa.o 00:02:18.777 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:18.777 CC module/accel/iaa/accel_iaa_rpc.o 00:02:18.777 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:18.777 LIB libspdk_env_dpdk_rpc.a 00:02:18.777 SO libspdk_env_dpdk_rpc.so.5.0 00:02:19.038 SYMLINK libspdk_env_dpdk_rpc.so 00:02:19.038 LIB libspdk_scheduler_gscheduler.a 00:02:19.038 LIB libspdk_accel_error.a 00:02:19.038 SO libspdk_scheduler_gscheduler.so.3.0 00:02:19.038 LIB libspdk_scheduler_dpdk_governor.a 00:02:19.038 LIB libspdk_accel_ioat.a 00:02:19.038 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:19.038 LIB libspdk_scheduler_dynamic.a 00:02:19.038 SO libspdk_accel_error.so.1.0 00:02:19.038 LIB libspdk_accel_iaa.a 00:02:19.038 LIB libspdk_blob_bdev.a 00:02:19.038 SYMLINK libspdk_scheduler_gscheduler.so 00:02:19.038 LIB libspdk_accel_dsa.a 00:02:19.038 SO libspdk_accel_ioat.so.5.0 00:02:19.038 SO libspdk_scheduler_dynamic.so.3.0 00:02:19.038 SO libspdk_blob_bdev.so.10.1 00:02:19.038 SO libspdk_accel_iaa.so.2.0 00:02:19.038 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:19.038 SYMLINK libspdk_accel_error.so 00:02:19.038 SO libspdk_accel_dsa.so.4.0 00:02:19.038 SYMLINK libspdk_accel_ioat.so 00:02:19.038 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.038 SYMLINK libspdk_blob_bdev.so 00:02:19.038 SYMLINK libspdk_accel_iaa.so 00:02:19.038 SYMLINK libspdk_accel_dsa.so 00:02:19.299 LIB libspdk_sock_posix.a 00:02:19.559 SO libspdk_sock_posix.so.5.0 00:02:19.559 CC module/bdev/gpt/gpt.o 00:02:19.559 CC module/bdev/gpt/vbdev_gpt.o 00:02:19.559 CC module/bdev/raid/bdev_raid.o 00:02:19.559 CC module/bdev/raid/bdev_raid_rpc.o 00:02:19.559 CC module/blobfs/bdev/blobfs_bdev.o 00:02:19.559 CC module/bdev/raid/bdev_raid_sb.o 00:02:19.559 CC module/bdev/raid/raid1.o 00:02:19.559 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:19.559 CC module/bdev/raid/raid0.o 00:02:19.559 CC module/bdev/delay/vbdev_delay.o 00:02:19.559 CC module/bdev/raid/concat.o 00:02:19.559 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:19.559 CC module/bdev/null/bdev_null.o 00:02:19.559 CC module/bdev/null/bdev_null_rpc.o 00:02:19.559 CC module/bdev/passthru/vbdev_passthru.o 00:02:19.559 CC module/bdev/error/vbdev_error.o 00:02:19.559 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:19.559 CC module/bdev/error/vbdev_error_rpc.o 00:02:19.559 CC 
module/bdev/lvol/vbdev_lvol.o 00:02:19.559 CC module/bdev/split/vbdev_split.o 00:02:19.559 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:19.559 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:19.559 CC module/bdev/aio/bdev_aio_rpc.o 00:02:19.559 CC module/bdev/nvme/bdev_nvme.o 00:02:19.559 CC module/bdev/aio/bdev_aio.o 00:02:19.559 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:19.559 CC module/bdev/malloc/bdev_malloc.o 00:02:19.559 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:19.559 CC module/bdev/split/vbdev_split_rpc.o 00:02:19.559 CC module/bdev/ftl/bdev_ftl.o 00:02:19.559 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:19.559 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:19.559 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:19.559 CC module/bdev/nvme/nvme_rpc.o 00:02:19.559 CC module/bdev/nvme/bdev_mdns_client.o 00:02:19.559 CC module/bdev/nvme/vbdev_opal.o 00:02:19.559 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:19.559 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:19.559 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:19.559 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:19.559 CC module/bdev/iscsi/bdev_iscsi.o 00:02:19.559 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:19.559 SYMLINK libspdk_sock_posix.so 00:02:19.819 LIB libspdk_blobfs_bdev.a 00:02:19.819 SO libspdk_blobfs_bdev.so.5.0 00:02:19.819 LIB libspdk_bdev_split.a 00:02:19.819 LIB libspdk_bdev_null.a 00:02:19.819 LIB libspdk_bdev_error.a 00:02:19.819 LIB libspdk_bdev_gpt.a 00:02:19.819 LIB libspdk_bdev_passthru.a 00:02:19.819 SO libspdk_bdev_error.so.5.0 00:02:19.819 SO libspdk_bdev_split.so.5.0 00:02:19.819 SO libspdk_bdev_null.so.5.0 00:02:19.819 SYMLINK libspdk_blobfs_bdev.so 00:02:19.819 SO libspdk_bdev_gpt.so.5.0 00:02:19.819 LIB libspdk_bdev_aio.a 00:02:19.819 LIB libspdk_bdev_ftl.a 00:02:19.819 SO libspdk_bdev_passthru.so.5.0 00:02:19.819 SO libspdk_bdev_aio.so.5.0 00:02:19.819 LIB libspdk_bdev_malloc.a 00:02:19.819 LIB libspdk_bdev_delay.a 00:02:19.819 SYMLINK libspdk_bdev_error.so 00:02:19.819 SYMLINK libspdk_bdev_null.so 00:02:19.819 LIB libspdk_bdev_zone_block.a 00:02:19.819 SYMLINK libspdk_bdev_split.so 00:02:19.819 SO libspdk_bdev_ftl.so.5.0 00:02:19.819 LIB libspdk_bdev_iscsi.a 00:02:19.819 SYMLINK libspdk_bdev_gpt.so 00:02:19.819 SO libspdk_bdev_malloc.so.5.0 00:02:19.819 SYMLINK libspdk_bdev_passthru.so 00:02:19.819 SO libspdk_bdev_delay.so.5.0 00:02:19.819 SYMLINK libspdk_bdev_aio.so 00:02:19.819 SO libspdk_bdev_zone_block.so.5.0 00:02:19.819 SO libspdk_bdev_iscsi.so.5.0 00:02:19.819 SYMLINK libspdk_bdev_ftl.so 00:02:20.081 LIB libspdk_bdev_lvol.a 00:02:20.081 SYMLINK libspdk_bdev_malloc.so 00:02:20.081 SYMLINK libspdk_bdev_zone_block.so 00:02:20.081 SYMLINK libspdk_bdev_delay.so 00:02:20.081 SYMLINK libspdk_bdev_iscsi.so 00:02:20.081 LIB libspdk_bdev_virtio.a 00:02:20.081 SO libspdk_bdev_lvol.so.5.0 00:02:20.081 SO libspdk_bdev_virtio.so.5.0 00:02:20.081 SYMLINK libspdk_bdev_lvol.so 00:02:20.081 SYMLINK libspdk_bdev_virtio.so 00:02:20.343 LIB libspdk_bdev_raid.a 00:02:20.343 SO libspdk_bdev_raid.so.5.0 00:02:20.343 SYMLINK libspdk_bdev_raid.so 00:02:21.288 LIB libspdk_bdev_nvme.a 00:02:21.288 SO libspdk_bdev_nvme.so.6.0 00:02:21.550 SYMLINK libspdk_bdev_nvme.so 00:02:21.811 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.811 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.811 CC module/event/subsystems/scheduler/scheduler.o 00:02:21.811 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.811 CC module/event/subsystems/vmd/vmd.o 00:02:21.811 CC module/event/subsystems/sock/sock.o 00:02:21.811 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:02:22.072 LIB libspdk_event_scheduler.a 00:02:22.072 LIB libspdk_event_sock.a 00:02:22.072 LIB libspdk_event_vhost_blk.a 00:02:22.073 LIB libspdk_event_iobuf.a 00:02:22.073 LIB libspdk_event_vmd.a 00:02:22.073 SO libspdk_event_scheduler.so.3.0 00:02:22.073 SO libspdk_event_iobuf.so.2.0 00:02:22.073 SO libspdk_event_sock.so.4.0 00:02:22.073 SO libspdk_event_vhost_blk.so.2.0 00:02:22.073 SO libspdk_event_vmd.so.5.0 00:02:22.073 SYMLINK libspdk_event_scheduler.so 00:02:22.073 SYMLINK libspdk_event_iobuf.so 00:02:22.073 SYMLINK libspdk_event_sock.so 00:02:22.073 SYMLINK libspdk_event_vhost_blk.so 00:02:22.073 SYMLINK libspdk_event_vmd.so 00:02:22.334 CC module/event/subsystems/accel/accel.o 00:02:22.594 LIB libspdk_event_accel.a 00:02:22.594 SO libspdk_event_accel.so.5.0 00:02:22.594 SYMLINK libspdk_event_accel.so 00:02:22.855 CC module/event/subsystems/bdev/bdev.o 00:02:23.116 LIB libspdk_event_bdev.a 00:02:23.116 SO libspdk_event_bdev.so.5.0 00:02:23.116 SYMLINK libspdk_event_bdev.so 00:02:23.377 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:23.377 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:23.377 CC module/event/subsystems/scsi/scsi.o 00:02:23.377 CC module/event/subsystems/nbd/nbd.o 00:02:23.377 CC module/event/subsystems/ublk/ublk.o 00:02:23.638 LIB libspdk_event_ublk.a 00:02:23.638 LIB libspdk_event_nbd.a 00:02:23.638 LIB libspdk_event_scsi.a 00:02:23.638 SO libspdk_event_ublk.so.2.0 00:02:23.638 SO libspdk_event_nbd.so.5.0 00:02:23.638 LIB libspdk_event_nvmf.a 00:02:23.638 SO libspdk_event_scsi.so.5.0 00:02:23.638 SYMLINK libspdk_event_ublk.so 00:02:23.638 SO libspdk_event_nvmf.so.5.0 00:02:23.638 SYMLINK libspdk_event_nbd.so 00:02:23.638 SYMLINK libspdk_event_scsi.so 00:02:23.638 SYMLINK libspdk_event_nvmf.so 00:02:23.962 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.962 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.962 LIB libspdk_event_vhost_scsi.a 00:02:24.223 LIB libspdk_event_iscsi.a 00:02:24.223 SO libspdk_event_vhost_scsi.so.2.0 00:02:24.223 SO libspdk_event_iscsi.so.5.0 00:02:24.223 SYMLINK libspdk_event_vhost_scsi.so 00:02:24.223 SYMLINK libspdk_event_iscsi.so 00:02:24.223 SO libspdk.so.5.0 00:02:24.223 SYMLINK libspdk.so 00:02:24.800 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.800 CC app/trace_record/trace_record.o 00:02:24.800 CC app/spdk_top/spdk_top.o 00:02:24.800 TEST_HEADER include/spdk/accel.h 00:02:24.800 TEST_HEADER include/spdk/accel_module.h 00:02:24.800 TEST_HEADER include/spdk/barrier.h 00:02:24.800 TEST_HEADER include/spdk/assert.h 00:02:24.800 TEST_HEADER include/spdk/base64.h 00:02:24.800 TEST_HEADER include/spdk/bdev.h 00:02:24.800 TEST_HEADER include/spdk/bdev_module.h 00:02:24.800 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.800 CC app/spdk_nvme_identify/identify.o 00:02:24.800 TEST_HEADER include/spdk/bit_pool.h 00:02:24.800 TEST_HEADER include/spdk/bit_array.h 00:02:24.800 CXX app/trace/trace.o 00:02:24.800 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.800 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.800 CC app/spdk_lspci/spdk_lspci.o 00:02:24.800 CC app/spdk_nvme_perf/perf.o 00:02:24.800 CC test/rpc_client/rpc_client_test.o 00:02:24.800 TEST_HEADER include/spdk/conf.h 00:02:24.800 TEST_HEADER include/spdk/config.h 00:02:24.800 TEST_HEADER include/spdk/cpuset.h 00:02:24.800 TEST_HEADER include/spdk/blob.h 00:02:24.800 TEST_HEADER include/spdk/blobfs.h 00:02:24.800 TEST_HEADER include/spdk/crc64.h 00:02:24.800 TEST_HEADER include/spdk/crc16.h 00:02:24.800 TEST_HEADER 
include/spdk/crc32.h 00:02:24.800 TEST_HEADER include/spdk/dma.h 00:02:24.800 TEST_HEADER include/spdk/endian.h 00:02:24.800 TEST_HEADER include/spdk/dif.h 00:02:24.800 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.800 TEST_HEADER include/spdk/env.h 00:02:24.800 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.800 TEST_HEADER include/spdk/fd_group.h 00:02:24.800 TEST_HEADER include/spdk/event.h 00:02:24.800 CC app/nvmf_tgt/nvmf_main.o 00:02:24.800 TEST_HEADER include/spdk/fd.h 00:02:24.800 TEST_HEADER include/spdk/ftl.h 00:02:24.800 TEST_HEADER include/spdk/file.h 00:02:24.800 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.800 CC app/spdk_dd/spdk_dd.o 00:02:24.800 TEST_HEADER include/spdk/hexlify.h 00:02:24.800 TEST_HEADER include/spdk/histogram_data.h 00:02:24.800 TEST_HEADER include/spdk/idxd.h 00:02:24.800 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.800 TEST_HEADER include/spdk/init.h 00:02:24.800 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.800 CC app/vhost/vhost.o 00:02:24.800 TEST_HEADER include/spdk/ioat.h 00:02:24.800 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.801 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.801 TEST_HEADER include/spdk/json.h 00:02:24.801 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.801 TEST_HEADER include/spdk/likely.h 00:02:24.801 TEST_HEADER include/spdk/log.h 00:02:24.801 TEST_HEADER include/spdk/lvol.h 00:02:24.801 TEST_HEADER include/spdk/memory.h 00:02:24.801 TEST_HEADER include/spdk/mmio.h 00:02:24.801 TEST_HEADER include/spdk/nbd.h 00:02:24.801 TEST_HEADER include/spdk/notify.h 00:02:24.801 TEST_HEADER include/spdk/nvme.h 00:02:24.801 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.801 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.801 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.801 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.801 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.801 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.801 CC app/spdk_tgt/spdk_tgt.o 00:02:24.801 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.801 TEST_HEADER include/spdk/nvmf.h 00:02:24.801 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.801 TEST_HEADER include/spdk/opal.h 00:02:24.801 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.801 TEST_HEADER include/spdk/opal_spec.h 00:02:24.801 TEST_HEADER include/spdk/pci_ids.h 00:02:24.801 TEST_HEADER include/spdk/pipe.h 00:02:24.801 TEST_HEADER include/spdk/rpc.h 00:02:24.801 TEST_HEADER include/spdk/reduce.h 00:02:24.801 TEST_HEADER include/spdk/queue.h 00:02:24.801 TEST_HEADER include/spdk/scheduler.h 00:02:24.801 TEST_HEADER include/spdk/scsi.h 00:02:24.801 TEST_HEADER include/spdk/sock.h 00:02:24.801 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.801 TEST_HEADER include/spdk/string.h 00:02:24.801 TEST_HEADER include/spdk/stdinc.h 00:02:24.801 TEST_HEADER include/spdk/thread.h 00:02:24.801 TEST_HEADER include/spdk/trace_parser.h 00:02:24.801 TEST_HEADER include/spdk/trace.h 00:02:24.801 TEST_HEADER include/spdk/tree.h 00:02:24.801 TEST_HEADER include/spdk/ublk.h 00:02:24.801 TEST_HEADER include/spdk/util.h 00:02:24.801 TEST_HEADER include/spdk/uuid.h 00:02:24.801 TEST_HEADER include/spdk/version.h 00:02:24.801 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.801 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.801 TEST_HEADER include/spdk/vhost.h 00:02:24.801 TEST_HEADER include/spdk/vmd.h 00:02:24.801 TEST_HEADER include/spdk/xor.h 00:02:24.801 TEST_HEADER include/spdk/zipf.h 00:02:24.801 CXX test/cpp_headers/accel.o 00:02:24.801 CXX test/cpp_headers/accel_module.o 00:02:24.801 CXX test/cpp_headers/assert.o 00:02:24.801 CXX 
test/cpp_headers/barrier.o 00:02:24.801 CXX test/cpp_headers/base64.o 00:02:24.801 CXX test/cpp_headers/bdev.o 00:02:24.801 CXX test/cpp_headers/bdev_module.o 00:02:24.801 CXX test/cpp_headers/bit_array.o 00:02:24.801 CXX test/cpp_headers/bdev_zone.o 00:02:24.801 CXX test/cpp_headers/blob_bdev.o 00:02:24.801 CXX test/cpp_headers/bit_pool.o 00:02:24.801 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.801 CXX test/cpp_headers/blobfs.o 00:02:24.801 CXX test/cpp_headers/blob.o 00:02:24.801 CXX test/cpp_headers/config.o 00:02:24.801 CC test/env/vtophys/vtophys.o 00:02:24.801 CXX test/cpp_headers/cpuset.o 00:02:24.801 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.801 CXX test/cpp_headers/conf.o 00:02:24.801 CXX test/cpp_headers/crc16.o 00:02:24.801 CXX test/cpp_headers/crc32.o 00:02:24.801 CC app/fio/nvme/fio_plugin.o 00:02:24.801 CXX test/cpp_headers/crc64.o 00:02:24.801 CXX test/cpp_headers/dif.o 00:02:24.801 CXX test/cpp_headers/dma.o 00:02:24.801 CC test/app/stub/stub.o 00:02:24.801 CXX test/cpp_headers/endian.o 00:02:24.801 CXX test/cpp_headers/env_dpdk.o 00:02:24.801 CXX test/cpp_headers/event.o 00:02:24.801 CC test/app/jsoncat/jsoncat.o 00:02:24.801 CXX test/cpp_headers/fd_group.o 00:02:24.801 CC examples/nvme/arbitration/arbitration.o 00:02:24.801 CXX test/cpp_headers/env.o 00:02:24.801 CC test/env/memory/memory_ut.o 00:02:24.801 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:24.801 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:24.801 CXX test/cpp_headers/fd.o 00:02:24.801 CXX test/cpp_headers/file.o 00:02:24.801 CC examples/ioat/perf/perf.o 00:02:24.801 CC examples/nvme/abort/abort.o 00:02:24.801 CXX test/cpp_headers/gpt_spec.o 00:02:24.801 CXX test/cpp_headers/hexlify.o 00:02:24.801 CXX test/cpp_headers/idxd.o 00:02:24.801 CXX test/cpp_headers/ftl.o 00:02:24.801 CC test/app/histogram_perf/histogram_perf.o 00:02:24.801 CC examples/ioat/verify/verify.o 00:02:24.801 CXX test/cpp_headers/histogram_data.o 00:02:24.801 CXX test/cpp_headers/ioat.o 00:02:24.801 CXX test/cpp_headers/idxd_spec.o 00:02:24.801 CXX test/cpp_headers/init.o 00:02:24.801 CXX test/cpp_headers/ioat_spec.o 00:02:24.801 CC test/env/pci/pci_ut.o 00:02:24.801 CXX test/cpp_headers/iscsi_spec.o 00:02:24.801 CXX test/cpp_headers/json.o 00:02:24.801 CC examples/nvme/reconnect/reconnect.o 00:02:24.801 CXX test/cpp_headers/jsonrpc.o 00:02:24.801 CC test/event/reactor/reactor.o 00:02:24.801 CC test/event/event_perf/event_perf.o 00:02:24.801 CC test/thread/poller_perf/poller_perf.o 00:02:24.801 CXX test/cpp_headers/log.o 00:02:24.801 CXX test/cpp_headers/likely.o 00:02:24.801 CC examples/nvme/hello_world/hello_world.o 00:02:24.801 CXX test/cpp_headers/lvol.o 00:02:24.801 CC examples/idxd/perf/perf.o 00:02:24.801 CXX test/cpp_headers/memory.o 00:02:24.801 CXX test/cpp_headers/mmio.o 00:02:24.801 CXX test/cpp_headers/nvme.o 00:02:24.801 CXX test/cpp_headers/notify.o 00:02:24.801 CXX test/cpp_headers/nbd.o 00:02:24.801 CC test/nvme/sgl/sgl.o 00:02:24.801 CC test/nvme/connect_stress/connect_stress.o 00:02:24.801 CXX test/cpp_headers/nvme_intel.o 00:02:24.801 CC test/nvme/reserve/reserve.o 00:02:24.801 CC examples/nvme/hotplug/hotplug.o 00:02:24.801 CC examples/sock/hello_world/hello_sock.o 00:02:24.801 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.801 CC test/nvme/aer/aer.o 00:02:24.801 CC test/event/reactor_perf/reactor_perf.o 00:02:24.801 CC test/nvme/fdp/fdp.o 00:02:24.801 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.801 CXX test/cpp_headers/nvme_spec.o 00:02:24.801 CXX test/cpp_headers/nvme_zns.o 00:02:24.801 CXX 
test/cpp_headers/nvmf_cmd.o 00:02:24.801 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.801 CC examples/accel/perf/accel_perf.o 00:02:24.801 CC test/nvme/reset/reset.o 00:02:24.801 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.801 CXX test/cpp_headers/nvmf.o 00:02:24.801 CC test/nvme/simple_copy/simple_copy.o 00:02:24.801 CC test/nvme/startup/startup.o 00:02:24.801 CC test/nvme/overhead/overhead.o 00:02:24.801 CC test/nvme/fused_ordering/fused_ordering.o 00:02:24.801 CXX test/cpp_headers/nvmf_spec.o 00:02:24.801 CC test/nvme/err_injection/err_injection.o 00:02:24.801 CXX test/cpp_headers/nvmf_transport.o 00:02:24.801 CC test/nvme/e2edp/nvme_dp.o 00:02:24.801 CC test/nvme/compliance/nvme_compliance.o 00:02:24.801 CXX test/cpp_headers/opal.o 00:02:24.801 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:24.801 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:24.801 CXX test/cpp_headers/pci_ids.o 00:02:24.801 CXX test/cpp_headers/opal_spec.o 00:02:24.801 CXX test/cpp_headers/pipe.o 00:02:24.801 CC test/event/app_repeat/app_repeat.o 00:02:24.801 CC app/fio/bdev/fio_plugin.o 00:02:24.801 CC test/app/bdev_svc/bdev_svc.o 00:02:24.801 CXX test/cpp_headers/queue.o 00:02:24.801 CXX test/cpp_headers/reduce.o 00:02:24.801 CC test/accel/dif/dif.o 00:02:24.801 CC test/nvme/cuse/cuse.o 00:02:24.801 CXX test/cpp_headers/rpc.o 00:02:24.801 CC examples/vmd/led/led.o 00:02:24.801 CC examples/blob/cli/blobcli.o 00:02:24.801 CXX test/cpp_headers/scheduler.o 00:02:24.801 CC test/dma/test_dma/test_dma.o 00:02:24.801 CC examples/util/zipf/zipf.o 00:02:24.801 CXX test/cpp_headers/scsi.o 00:02:24.801 CC test/bdev/bdevio/bdevio.o 00:02:24.801 CC test/nvme/boot_partition/boot_partition.o 00:02:24.801 CC examples/blob/hello_world/hello_blob.o 00:02:24.801 CC test/event/scheduler/scheduler.o 00:02:24.801 CC examples/nvmf/nvmf/nvmf.o 00:02:24.801 CC examples/bdev/hello_world/hello_bdev.o 00:02:25.068 CC examples/thread/thread/thread_ex.o 00:02:25.068 CXX test/cpp_headers/scsi_spec.o 00:02:25.068 CC test/blobfs/mkfs/mkfs.o 00:02:25.068 CC examples/bdev/bdevperf/bdevperf.o 00:02:25.068 CXX test/cpp_headers/sock.o 00:02:25.068 LINK spdk_lspci 00:02:25.068 CC test/env/mem_callbacks/mem_callbacks.o 00:02:25.068 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.068 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.068 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:25.068 CC test/lvol/esnap/esnap.o 00:02:25.068 LINK spdk_nvme_discover 00:02:25.068 LINK nvmf_tgt 00:02:25.335 LINK interrupt_tgt 00:02:25.335 LINK rpc_client_test 00:02:25.335 LINK iscsi_tgt 00:02:25.335 LINK spdk_trace_record 00:02:25.335 LINK vhost 00:02:25.335 LINK vtophys 00:02:25.335 LINK jsoncat 00:02:25.335 LINK histogram_perf 00:02:25.335 LINK lsvmd 00:02:25.335 LINK spdk_tgt 00:02:25.335 LINK event_perf 00:02:25.335 LINK poller_perf 00:02:25.335 LINK cmb_copy 00:02:25.335 LINK env_dpdk_post_init 00:02:25.335 LINK app_repeat 00:02:25.335 LINK reactor_perf 00:02:25.335 LINK zipf 00:02:25.335 LINK reactor 00:02:25.335 LINK led 00:02:25.335 LINK pmr_persistence 00:02:25.599 LINK stub 00:02:25.599 LINK bdev_svc 00:02:25.599 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.599 LINK startup 00:02:25.599 LINK connect_stress 00:02:25.599 LINK boot_partition 00:02:25.599 LINK fused_ordering 00:02:25.599 LINK doorbell_aers 00:02:25.599 CXX test/cpp_headers/stdinc.o 00:02:25.599 LINK simple_copy 00:02:25.599 LINK err_injection 00:02:25.599 LINK hello_world 00:02:25.599 LINK reserve 00:02:25.599 LINK ioat_perf 00:02:25.599 LINK hello_sock 00:02:25.599 LINK reset 
00:02:25.599 CXX test/cpp_headers/string.o 00:02:25.599 CXX test/cpp_headers/thread.o 00:02:25.599 LINK hotplug 00:02:25.599 LINK verify 00:02:25.599 LINK sgl 00:02:25.599 CXX test/cpp_headers/trace.o 00:02:25.599 LINK spdk_dd 00:02:25.599 LINK mkfs 00:02:25.599 CXX test/cpp_headers/trace_parser.o 00:02:25.599 LINK overhead 00:02:25.599 CXX test/cpp_headers/tree.o 00:02:25.599 CXX test/cpp_headers/ublk.o 00:02:25.599 CXX test/cpp_headers/util.o 00:02:25.599 LINK nvme_dp 00:02:25.599 CXX test/cpp_headers/uuid.o 00:02:25.599 CXX test/cpp_headers/version.o 00:02:25.599 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.599 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.599 CXX test/cpp_headers/vhost.o 00:02:25.599 CXX test/cpp_headers/vmd.o 00:02:25.599 LINK scheduler 00:02:25.599 LINK hello_blob 00:02:25.599 CXX test/cpp_headers/xor.o 00:02:25.599 CXX test/cpp_headers/zipf.o 00:02:25.599 LINK hello_bdev 00:02:25.599 LINK thread 00:02:25.599 LINK fdp 00:02:25.599 LINK aer 00:02:25.599 LINK nvmf 00:02:25.599 LINK nvme_compliance 00:02:25.599 LINK arbitration 00:02:25.599 LINK reconnect 00:02:25.599 LINK idxd_perf 00:02:25.599 LINK spdk_trace 00:02:25.860 LINK dif 00:02:25.860 LINK bdevio 00:02:25.860 LINK abort 00:02:25.860 LINK test_dma 00:02:25.860 LINK pci_ut 00:02:25.860 LINK nvme_fuzz 00:02:25.860 LINK blobcli 00:02:25.860 LINK spdk_bdev 00:02:25.860 LINK spdk_nvme 00:02:25.860 LINK accel_perf 00:02:25.860 LINK nvme_manage 00:02:25.860 LINK mem_callbacks 00:02:26.123 LINK vhost_fuzz 00:02:26.123 LINK spdk_top 00:02:26.123 LINK spdk_nvme_identify 00:02:26.123 LINK spdk_nvme_perf 00:02:26.123 LINK bdevperf 00:02:26.123 LINK memory_ut 00:02:26.384 LINK cuse 00:02:26.954 LINK iscsi_fuzz 00:02:28.867 LINK esnap 00:02:29.129 00:02:29.129 real 0m45.643s 00:02:29.129 user 6m16.338s 00:02:29.129 sys 3m59.851s 00:02:29.129 07:54:59 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:29.129 07:54:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.129 ************************************ 00:02:29.129 END TEST make 00:02:29.129 ************************************ 00:02:29.129 07:54:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:29.129 07:54:59 -- nvmf/common.sh@7 -- # uname -s 00:02:29.129 07:54:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:29.129 07:54:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:29.129 07:54:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:29.129 07:54:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:29.129 07:54:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:29.129 07:54:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:29.129 07:54:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:29.129 07:54:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:29.129 07:54:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:29.129 07:54:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:29.129 07:54:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:29.129 07:54:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:29.129 07:54:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:29.129 07:54:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:29.129 07:54:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:29.129 07:54:59 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:29.129 07:54:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:29.129 07:54:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.129 07:54:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.129 07:54:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.129 07:54:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.129 07:54:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.129 07:54:59 -- paths/export.sh@5 -- # export PATH 00:02:29.129 07:54:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.129 07:54:59 -- nvmf/common.sh@46 -- # : 0 00:02:29.129 07:54:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:29.129 07:54:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:29.129 07:54:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:29.129 07:54:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:29.129 07:54:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:29.129 07:54:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:29.129 07:54:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:29.129 07:54:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:29.129 07:54:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:29.129 07:54:59 -- spdk/autotest.sh@32 -- # uname -s 00:02:29.129 07:54:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:29.129 07:54:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:29.129 07:54:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.129 07:54:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:29.129 07:54:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:29.129 07:54:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:29.129 07:54:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:29.129 07:54:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:29.129 07:54:59 -- spdk/autotest.sh@48 -- # udevadm_pid=773728 00:02:29.129 07:54:59 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:29.129 07:54:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:29.129 07:54:59 -- spdk/autotest.sh@54 -- # echo 773730 00:02:29.129 
07:54:59 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:29.129 07:54:59 -- spdk/autotest.sh@56 -- # echo 773731 00:02:29.129 07:54:59 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:29.129 07:54:59 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:29.129 07:54:59 -- spdk/autotest.sh@60 -- # echo 773732 00:02:29.129 07:54:59 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:29.129 07:54:59 -- spdk/autotest.sh@62 -- # echo 773733 00:02:29.129 07:54:59 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:29.130 07:54:59 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:29.130 07:54:59 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:29.130 07:54:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:29.130 07:54:59 -- common/autotest_common.sh@10 -- # set +x 00:02:29.130 07:54:59 -- spdk/autotest.sh@70 -- # create_test_list 00:02:29.130 07:54:59 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:29.130 07:54:59 -- common/autotest_common.sh@10 -- # set +x 00:02:29.130 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:29.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:29.392 07:54:59 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:29.392 07:54:59 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.392 07:54:59 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.392 07:54:59 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:29.392 07:54:59 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.392 07:54:59 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:29.392 07:54:59 -- common/autotest_common.sh@1440 -- # uname 00:02:29.392 07:54:59 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:29.392 07:54:59 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:29.392 07:54:59 -- common/autotest_common.sh@1460 -- # uname 00:02:29.392 07:54:59 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:29.392 07:54:59 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:29.392 07:54:59 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:29.392 07:54:59 -- spdk/autotest.sh@83 -- # hash lcov 00:02:29.392 07:54:59 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:29.392 07:54:59 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:29.392 --rc lcov_branch_coverage=1 00:02:29.392 --rc lcov_function_coverage=1 00:02:29.392 --rc genhtml_branch_coverage=1 00:02:29.392 --rc genhtml_function_coverage=1 00:02:29.392 --rc genhtml_legend=1 00:02:29.392 --rc geninfo_all_blocks=1 00:02:29.392 ' 00:02:29.392 07:54:59 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:29.392 --rc 
lcov_branch_coverage=1 00:02:29.392 --rc lcov_function_coverage=1 00:02:29.392 --rc genhtml_branch_coverage=1 00:02:29.392 --rc genhtml_function_coverage=1 00:02:29.392 --rc genhtml_legend=1 00:02:29.392 --rc geninfo_all_blocks=1 00:02:29.392 ' 00:02:29.392 07:54:59 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:29.392 --rc lcov_branch_coverage=1 00:02:29.392 --rc lcov_function_coverage=1 00:02:29.392 --rc genhtml_branch_coverage=1 00:02:29.392 --rc genhtml_function_coverage=1 00:02:29.392 --rc genhtml_legend=1 00:02:29.392 --rc geninfo_all_blocks=1 00:02:29.392 --no-external' 00:02:29.392 07:54:59 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:29.392 --rc lcov_branch_coverage=1 00:02:29.392 --rc lcov_function_coverage=1 00:02:29.392 --rc genhtml_branch_coverage=1 00:02:29.392 --rc genhtml_function_coverage=1 00:02:29.392 --rc genhtml_legend=1 00:02:29.392 --rc geninfo_all_blocks=1 00:02:29.392 --no-external' 00:02:29.392 07:54:59 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:29.392 lcov: LCOV version 1.14 00:02:29.392 07:54:59 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:41.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:41.636 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:41.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:41.636 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:41.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:41.636 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:53.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:53.878 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 
00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:54.139 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:54.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:54.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:54.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:54.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:54.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:54.400 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:54.401 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:54.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:54.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions 
found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:54.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:54.662 
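The long run of geninfo warnings above is expected for the cpp_headers unit tests: each of those tests only compiles a single SPDK public header into an object file, so the resulting .gcno files describe no executable functions and geninfo reports "no functions found" instead of coverage data. A minimal sketch of how such coverage is typically captured with lcov (the directory and output names here are illustrative, not taken from this job):

    # Illustrative lcov invocation; paths and file names are hypothetical.
    lcov --capture --directory ./build --output-file coverage.info   # runs geninfo over .gcno/.gcda
    genhtml coverage.info --output-directory coverage_html
    # Objects built from header-only "compile tests" contain no functions,
    # so geninfo warns about them and skips them rather than failing.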
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:54.662 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:54.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:54.663 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:56.579 07:55:26 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:56.579 07:55:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:56.579 07:55:26 -- common/autotest_common.sh@10 -- # set +x 00:02:56.579 07:55:26 -- 
spdk/autotest.sh@102 -- # rm -f 00:02:56.579 07:55:26 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.884 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:59.884 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:59.884 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:59.884 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:59.884 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:59.884 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:59.884 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:00.145 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:00.145 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:00.145 07:55:30 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:00.145 07:55:30 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:00.145 07:55:30 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:00.145 07:55:30 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:00.145 07:55:30 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:00.145 07:55:30 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:00.145 07:55:30 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:00.145 07:55:30 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.145 07:55:30 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:00.145 07:55:30 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:00.145 07:55:30 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:00.145 07:55:30 -- spdk/autotest.sh@121 -- # grep -v p 00:03:00.146 07:55:30 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:00.146 07:55:30 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:00.146 07:55:30 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:00.146 07:55:30 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:00.146 07:55:30 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:00.408 No valid GPT data, bailing 00:03:00.408 07:55:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:00.408 07:55:30 -- scripts/common.sh@393 -- # pt= 00:03:00.408 07:55:30 -- scripts/common.sh@394 -- # return 1 00:03:00.408 07:55:30 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:00.408 1+0 records in 00:03:00.408 1+0 records out 00:03:00.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00126811 s, 827 MB/s 00:03:00.408 07:55:30 -- spdk/autotest.sh@129 -- # sync 00:03:00.408 07:55:30 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:00.408 07:55:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:00.408 07:55:30 -- common/autotest_common.sh@22 -- # 
reap_spdk_processes 00:03:08.559 07:55:38 -- spdk/autotest.sh@135 -- # uname -s 00:03:08.559 07:55:38 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:08.559 07:55:38 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.559 07:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.559 07:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.559 07:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:08.559 ************************************ 00:03:08.559 START TEST setup.sh 00:03:08.559 ************************************ 00:03:08.559 07:55:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.559 * Looking for test storage... 00:03:08.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:08.560 07:55:38 -- setup/test-setup.sh@10 -- # uname -s 00:03:08.560 07:55:38 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:08.560 07:55:38 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:08.560 07:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.560 07:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.560 07:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:08.560 ************************************ 00:03:08.560 START TEST acl 00:03:08.560 ************************************ 00:03:08.560 07:55:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:08.560 * Looking for test storage... 00:03:08.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:08.560 07:55:38 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:08.560 07:55:38 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:08.560 07:55:38 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:08.560 07:55:38 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:08.560 07:55:38 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:08.560 07:55:38 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:08.560 07:55:38 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:08.560 07:55:38 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.560 07:55:38 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:08.560 07:55:38 -- setup/acl.sh@12 -- # devs=() 00:03:08.560 07:55:38 -- setup/acl.sh@12 -- # declare -a devs 00:03:08.560 07:55:38 -- setup/acl.sh@13 -- # drivers=() 00:03:08.560 07:55:38 -- setup/acl.sh@13 -- # declare -A drivers 00:03:08.560 07:55:38 -- setup/acl.sh@51 -- # setup reset 00:03:08.560 07:55:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.560 07:55:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.776 07:55:42 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:12.776 07:55:42 -- setup/acl.sh@16 -- # local dev driver 00:03:12.776 07:55:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.776 07:55:42 -- setup/acl.sh@15 -- # setup output status 00:03:12.776 07:55:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.776 07:55:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:15.328 Hugepages 00:03:15.328 node hugesize free / 
total 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 00:03:15.328 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.328 07:55:45 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.328 07:55:45 -- setup/acl.sh@20 -- # continue 00:03:15.328 07:55:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:15.590 07:55:46 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- 
setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.590 07:55:46 -- setup/acl.sh@20 -- # continue 00:03:15.590 07:55:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.590 07:55:46 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:15.590 07:55:46 -- setup/acl.sh@54 -- # run_test denied denied 00:03:15.590 07:55:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:15.590 07:55:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.590 07:55:46 -- common/autotest_common.sh@10 -- # set +x 00:03:15.590 ************************************ 00:03:15.590 START TEST denied 00:03:15.590 ************************************ 00:03:15.590 07:55:46 -- common/autotest_common.sh@1104 -- # denied 00:03:15.590 07:55:46 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:15.590 07:55:46 -- setup/acl.sh@38 -- # setup output config 00:03:15.590 07:55:46 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:15.590 07:55:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.590 07:55:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:19.803 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:19.803 07:55:49 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:19.803 07:55:49 -- setup/acl.sh@28 -- # local dev driver 00:03:19.803 07:55:49 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:19.803 07:55:49 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:19.803 07:55:49 -- setup/acl.sh@32 -- # 
readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:19.803 07:55:49 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:19.803 07:55:49 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:19.803 07:55:49 -- setup/acl.sh@41 -- # setup reset 00:03:19.803 07:55:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.803 07:55:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.017 00:03:24.017 real 0m8.262s 00:03:24.017 user 0m2.842s 00:03:24.017 sys 0m4.759s 00:03:24.017 07:55:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.017 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:24.017 ************************************ 00:03:24.017 END TEST denied 00:03:24.017 ************************************ 00:03:24.017 07:55:54 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:24.017 07:55:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:24.017 07:55:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:24.017 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:24.017 ************************************ 00:03:24.017 START TEST allowed 00:03:24.017 ************************************ 00:03:24.017 07:55:54 -- common/autotest_common.sh@1104 -- # allowed 00:03:24.017 07:55:54 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:24.017 07:55:54 -- setup/acl.sh@45 -- # setup output config 00:03:24.017 07:55:54 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:24.017 07:55:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.017 07:55:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.314 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:29.314 07:55:59 -- setup/acl.sh@47 -- # verify 00:03:29.314 07:55:59 -- setup/acl.sh@28 -- # local dev driver 00:03:29.314 07:55:59 -- setup/acl.sh@48 -- # setup reset 00:03:29.314 07:55:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.314 07:55:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.530 00:03:33.530 real 0m9.169s 00:03:33.530 user 0m2.735s 00:03:33.530 sys 0m4.770s 00:03:33.530 07:56:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.530 07:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:33.530 ************************************ 00:03:33.530 END TEST allowed 00:03:33.530 ************************************ 00:03:33.530 00:03:33.530 real 0m24.871s 00:03:33.530 user 0m8.392s 00:03:33.530 sys 0m14.362s 00:03:33.530 07:56:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.530 07:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:33.530 ************************************ 00:03:33.530 END TEST acl 00:03:33.530 ************************************ 00:03:33.530 07:56:03 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.530 07:56:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.530 07:56:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.530 07:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:33.530 ************************************ 00:03:33.530 START TEST hugepages 00:03:33.530 ************************************ 00:03:33.530 07:56:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:33.530 * Looking for test storage... 
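The acl results above exercise scripts/setup.sh in two directions: the denied pass exports PCI_BLOCKED=' 0000:65:00.0' and expects the config step to print "Skipping denied controller at 0000:65:00.0", while the allowed pass sets PCI_ALLOWED=0000:65:00.0 and expects the same NVMe controller to be rebound from the kernel nvme driver to vfio-pci. A rough sketch of that gating, assuming the standard sysfs driver_override rebind mechanism (illustrative only, not the literal setup.sh code):

    # Illustrative block/allow gating for one controller (requires root).
    bdf=0000:65:00.0
    if [[ " $PCI_BLOCKED " == *" $bdf "* ]]; then
        echo "Skipping denied controller at $bdf"
    elif [[ -z "$PCI_ALLOWED" || " $PCI_ALLOWED " == *" $bdf "* ]]; then
        echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"
        echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf"   > /sys/bus/pci/drivers_probe
    fi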
00:03:33.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:33.530 07:56:03 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:33.530 07:56:03 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:33.530 07:56:03 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:33.530 07:56:03 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:33.530 07:56:03 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:33.530 07:56:03 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:33.530 07:56:03 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:33.530 07:56:03 -- setup/common.sh@18 -- # local node= 00:03:33.530 07:56:03 -- setup/common.sh@19 -- # local var val 00:03:33.530 07:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:33.530 07:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.530 07:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.530 07:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.530 07:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.530 07:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.530 07:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 108657260 kB' 'MemAvailable: 111851712 kB' 'Buffers: 4132 kB' 'Cached: 9186448 kB' 'SwapCached: 0 kB' 'Active: 6244648 kB' 'Inactive: 3507332 kB' 'Active(anon): 5856392 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564844 kB' 'Mapped: 242016 kB' 'Shmem: 5294992 kB' 'KReclaimable: 246388 kB' 'Slab: 876428 kB' 'SReclaimable: 246388 kB' 'SUnreclaim: 630040 kB' 'KernelStack: 27408 kB' 'PageTables: 9616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460868 kB' 'Committed_AS: 7470444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234692 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.530 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.530 07:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 
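Most of the trace in this stretch is setup/common.sh's get_meminfo walking /proc/meminfo field by field under xtrace until it reaches Hugepagesize and echoes 2048. The same lookup can be expressed much more compactly; a minimal sketch, assuming the usual "Field: value kB" layout of /proc/meminfo (the helper name below is hypothetical, not the script's own):

    # Hypothetical helper, equivalent in spirit to the traced get_meminfo loop.
    get_meminfo_sketch() {
        local field=$1   # e.g. Hugepagesize, MemFree, AnonHugePages
        awk -v f="$field" 'BEGIN { key = f ":" } $1 == key { print $2; exit }' /proc/meminfo
    }
    get_meminfo_sketch Hugepagesize   # prints 2048 on this runner (2 MiB pages)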
00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.531 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.531 07:56:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # continue 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:33.532 07:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:33.532 07:56:03 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:33.532 07:56:03 -- setup/common.sh@33 -- # echo 2048 00:03:33.532 07:56:03 -- setup/common.sh@33 -- # return 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:33.532 07:56:03 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:33.532 07:56:03 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:33.532 07:56:03 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:33.532 07:56:03 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:33.532 07:56:03 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:33.532 07:56:03 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:33.532 07:56:03 -- setup/hugepages.sh@207 -- # get_nodes 00:03:33.532 07:56:03 -- setup/hugepages.sh@27 -- # local node 00:03:33.532 07:56:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.532 07:56:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:33.532 07:56:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.532 07:56:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:33.532 07:56:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:33.532 07:56:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.532 07:56:03 -- setup/hugepages.sh@208 -- # clear_hp 00:03:33.532 07:56:03 -- setup/hugepages.sh@37 -- # local node hp 00:03:33.532 07:56:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.532 07:56:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.532 07:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.532 07:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.532 07:56:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.532 07:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.532 07:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:33.532 07:56:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:33.532 07:56:03 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:33.532 07:56:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.532 07:56:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.532 07:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:33.532 ************************************ 00:03:33.532 START TEST default_setup 00:03:33.532 ************************************ 00:03:33.532 07:56:03 -- common/autotest_common.sh@1104 -- # default_setup 00:03:33.532 07:56:03 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:33.532 07:56:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:33.532 07:56:03 -- setup/hugepages.sh@51 -- # shift 00:03:33.532 07:56:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:33.532 07:56:03 -- setup/hugepages.sh@52 -- # local node_ids 00:03:33.532 07:56:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:33.532 07:56:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:33.532 07:56:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:33.532 07:56:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:33.532 07:56:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:33.532 07:56:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:33.532 07:56:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:33.532 07:56:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:33.532 07:56:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
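clear_hp above zeroes every per-node hugepage counter, and default_setup then requests 2097152 kB of 2048 kB pages, i.e. nr_hugepages=1024, pinned to node 0 only. In sysfs terms this boils down to writes like the following (a sketch of the effect, not the hugepages.sh code itself):

    # Effect of clear_hp plus a node-0-only allocation of 1024 x 2048 kB pages (2 GiB).
    # clear_hp in the trace does this for every supported page size; 2048 kB shown here.
    for node in /sys/devices/system/node/node[0-9]*; do
        echo 0 > "$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages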
00:03:33.532 07:56:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:33.532 07:56:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:33.532 07:56:03 -- setup/hugepages.sh@73 -- # return 0 00:03:33.532 07:56:03 -- setup/hugepages.sh@137 -- # setup output 00:03:33.532 07:56:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.532 07:56:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.909 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:36.909 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.909 07:56:07 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:36.909 07:56:07 -- setup/hugepages.sh@89 -- # local node 00:03:36.909 07:56:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.909 07:56:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.909 07:56:07 -- setup/hugepages.sh@92 -- # local surp 00:03:36.909 07:56:07 -- setup/hugepages.sh@93 -- # local resv 00:03:36.909 07:56:07 -- setup/hugepages.sh@94 -- # local anon 00:03:36.909 07:56:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.909 07:56:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.909 07:56:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.909 07:56:07 -- setup/common.sh@18 -- # local node= 00:03:36.909 07:56:07 -- setup/common.sh@19 -- # local var val 00:03:36.909 07:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.909 07:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.909 07:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.909 07:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.909 07:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.909 07:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110826440 kB' 'MemAvailable: 114020732 kB' 'Buffers: 4132 kB' 'Cached: 9186572 kB' 'SwapCached: 0 kB' 'Active: 6264444 kB' 'Inactive: 3507332 kB' 'Active(anon): 5876188 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584272 kB' 'Mapped: 242316 kB' 'Shmem: 5295116 kB' 'KReclaimable: 246068 kB' 'Slab: 873952 kB' 'SReclaimable: 246068 kB' 'SUnreclaim: 627884 kB' 'KernelStack: 27280 
kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7486996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234516 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 
07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.909 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.909 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ 
KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.910 07:56:07 -- setup/common.sh@33 -- # echo 0 00:03:36.910 07:56:07 -- setup/common.sh@33 -- # return 0 00:03:36.910 07:56:07 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.910 07:56:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.910 07:56:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.910 07:56:07 -- setup/common.sh@18 -- # local node= 00:03:36.910 07:56:07 -- setup/common.sh@19 -- # local var val 00:03:36.910 07:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.910 07:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.910 07:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.910 07:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.910 07:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.910 07:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110826040 kB' 'MemAvailable: 114020332 kB' 'Buffers: 4132 kB' 'Cached: 9186576 kB' 'SwapCached: 0 kB' 'Active: 6264124 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875868 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583956 kB' 'Mapped: 242304 kB' 'Shmem: 5295120 kB' 'KReclaimable: 246068 kB' 'Slab: 873944 kB' 'SReclaimable: 246068 kB' 'SUnreclaim: 627876 kB' 'KernelStack: 27248 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7487008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234500 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 
kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.910 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.910 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- 
setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 
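The long run of key comparisons above and below is setup/common.sh walking every /proc/meminfo line and keeping only the field it was asked for (AnonHugePages first, now HugePages_Surp). A minimal stand-alone sketch of that kind of lookup, written for illustration only (the function name and the sed/awk matching are assumptions, not the project's code):

    get_meminfo_field() {
        # Return one numeric field (e.g. HugePages_Surp) from /proc/meminfo,
        # or from a per-node meminfo file when a node number is supplied.
        local key=$1 node=${2-} file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        # Per-node files prefix every line with "Node <n>"; strip that before matching.
        sed -E 's/^Node [0-9]+ +//' "$file" | awk -F': +' -v k="$key" '$1 == k { print $2 + 0; exit }'
    }

    get_meminfo_field HugePages_Surp    # prints 0 here, matching the dumps above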
00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.911 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.911 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.912 07:56:07 -- setup/common.sh@33 -- # echo 0 00:03:36.912 07:56:07 -- setup/common.sh@33 -- # return 0 00:03:36.912 07:56:07 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.912 07:56:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.912 07:56:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.912 07:56:07 -- setup/common.sh@18 -- # local node= 00:03:36.912 07:56:07 -- setup/common.sh@19 -- # local var val 00:03:36.912 07:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.912 07:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.912 07:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.912 07:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.912 07:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.912 07:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.912 07:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110828508 kB' 'MemAvailable: 114022800 kB' 'Buffers: 4132 kB' 'Cached: 9186588 kB' 'SwapCached: 0 kB' 'Active: 6263460 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875204 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583772 kB' 'Mapped: 242212 kB' 'Shmem: 5295132 kB' 'KReclaimable: 246068 kB' 'Slab: 873912 kB' 'SReclaimable: 246068 kB' 'SUnreclaim: 627844 kB' 'KernelStack: 27264 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7487040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234500 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.912 07:56:07 -- setup/common.sh@32 -- # continue 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.912 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 
-- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 
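With AnonHugePages and HugePages_Surp already read back as 0, this pass fetches HugePages_Rsvd the same way; the script then only has to confirm that the kernel granted all 1024 requested pages, with nothing left surplus or reserved (the (( 1024 == nr_hugepages + surp + resv )) check a little further down). A hedged stand-alone version of that accounting, reusing the illustrative get_meminfo_field helper sketched earlier (variable names are assumptions, not the test's own):

    requested=1024
    total=$(get_meminfo_field HugePages_Total)    # 1024 in the dumps here
    surp=$(get_meminfo_field HugePages_Surp)      # 0
    resv=$(get_meminfo_field HugePages_Rsvd)      # 0
    (( total == requested && surp == 0 && resv == 0 )) || echo 'hugepage accounting mismatch' >&2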
00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.205 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.205 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.205 07:56:07 -- setup/common.sh@33 -- # echo 0 00:03:37.206 07:56:07 -- setup/common.sh@33 -- # return 0 00:03:37.206 07:56:07 -- setup/hugepages.sh@100 -- # resv=0 00:03:37.206 07:56:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.206 nr_hugepages=1024 00:03:37.206 07:56:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.206 resv_hugepages=0 00:03:37.206 07:56:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.206 surplus_hugepages=0 00:03:37.206 07:56:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.206 anon_hugepages=0 00:03:37.206 07:56:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.206 07:56:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.206 07:56:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.206 07:56:07 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:37.206 07:56:07 -- setup/common.sh@18 -- # local node= 00:03:37.206 07:56:07 -- setup/common.sh@19 -- # local var val 00:03:37.206 07:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.206 07:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.206 07:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.206 07:56:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.206 07:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.206 07:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110828572 kB' 'MemAvailable: 114022864 kB' 'Buffers: 4132 kB' 'Cached: 9186600 kB' 'SwapCached: 0 kB' 'Active: 6263432 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875176 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583652 kB' 'Mapped: 242212 kB' 'Shmem: 5295144 kB' 'KReclaimable: 246068 kB' 'Slab: 873912 kB' 'SReclaimable: 246068 kB' 'SUnreclaim: 627844 kB' 'KernelStack: 27248 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7487056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234484 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
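As a cross-check on the dumps themselves: 1024 huge pages at the default 2048 kB page size account for exactly the Hugetlb figure the kernel reports. Plain shell arithmetic, not part of the test:

    echo $(( 1024 * 2048 ))    # 2097152 kB, matching 'Hugetlb: 2097152 kB' in the dumps above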
00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.206 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.206 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # 
continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 
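Once the global HugePages_Total lookup in progress above comes back as 1024 just below, get_nodes enumerates /sys/devices/system/node/node[0-9]* (two nodes on this machine, per the no_nodes=2 that follows) and the same lookup is repeated against each node's own meminfo, since the pages were requested on a single node. A per-node sketch under the same assumptions as the earlier helper:

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # get_meminfo_field is the illustrative helper sketched earlier, here given a node number.
        printf 'node%s: %s huge pages\n' "$node" "$(get_meminfo_field HugePages_Total "$node")"
    done
    # Expected here: node0 -> 1024, node1 -> 0, matching the nodes_sys assignments in the trace below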
00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.207 07:56:07 -- setup/common.sh@33 -- # echo 1024 00:03:37.207 07:56:07 -- setup/common.sh@33 -- # return 0 00:03:37.207 07:56:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.207 07:56:07 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.207 07:56:07 -- setup/hugepages.sh@27 -- # local node 00:03:37.207 07:56:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.207 07:56:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.207 07:56:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.207 07:56:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.207 07:56:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.207 07:56:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.207 07:56:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.207 07:56:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.207 07:56:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.207 07:56:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.207 07:56:07 -- setup/common.sh@18 -- # local node=0 00:03:37.207 07:56:07 -- setup/common.sh@19 -- # local var val 00:03:37.207 07:56:07 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.207 07:56:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.207 07:56:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.207 07:56:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.207 07:56:07 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.207 07:56:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53946264 kB' 'MemUsed: 11712744 kB' 'SwapCached: 0 
kB' 'Active: 4355124 kB' 'Inactive: 3272260 kB' 'Active(anon): 4176348 kB' 'Inactive(anon): 0 kB' 'Active(file): 178776 kB' 'Inactive(file): 3272260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7277284 kB' 'Mapped: 157632 kB' 'AnonPages: 353564 kB' 'Shmem: 3826248 kB' 'KernelStack: 14280 kB' 'PageTables: 5328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98184 kB' 'Slab: 415896 kB' 'SReclaimable: 98184 kB' 'SUnreclaim: 317712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 
07:56:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.207 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.207 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': 
' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # continue 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.208 07:56:07 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.208 07:56:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.208 07:56:07 -- setup/common.sh@33 -- # echo 0 00:03:37.208 07:56:07 -- setup/common.sh@33 -- # return 0 00:03:37.208 07:56:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.208 07:56:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.208 07:56:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.208 07:56:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.208 07:56:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.208 node0=1024 expecting 1024 00:03:37.208 07:56:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.208 00:03:37.208 real 0m3.807s 00:03:37.208 user 0m1.448s 00:03:37.208 sys 0m2.360s 00:03:37.208 07:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.208 07:56:07 -- common/autotest_common.sh@10 -- # set +x 00:03:37.208 ************************************ 00:03:37.208 END TEST default_setup 00:03:37.208 ************************************ 00:03:37.208 07:56:07 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:37.208 07:56:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:37.208 07:56:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.208 07:56:07 -- common/autotest_common.sh@10 -- # set +x 00:03:37.208 ************************************ 00:03:37.208 START TEST per_node_1G_alloc 00:03:37.208 ************************************ 00:03:37.208 07:56:07 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:37.208 07:56:07 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:37.208 07:56:07 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:37.208 07:56:07 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.208 07:56:07 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:37.208 07:56:07 -- setup/hugepages.sh@51 -- # shift 00:03:37.208 07:56:07 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:37.208 07:56:07 -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.208 07:56:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.208 07:56:07 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.208 07:56:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:37.208 07:56:07 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:37.208 07:56:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.208 07:56:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.208 07:56:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.208 07:56:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.208 07:56:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.208 07:56:07 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:37.208 07:56:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.208 07:56:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.208 07:56:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.208 07:56:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.208 07:56:07 -- setup/hugepages.sh@73 -- # return 0 00:03:37.208 07:56:07 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:37.208 
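At this point default_setup has passed and per_node_1G_alloc begins: get_test_nr_hugepages is called with a 1048576 kB (1 GiB) request for nodes 0 and 1, which with the 2048 kB Hugepagesize reported in the meminfo dumps works out to 512 hugepages on each requested node (NRHUGE=512, HUGENODE=0,1, as the next entries show). A minimal sketch of that arithmetic, assuming the 2 MiB default hugepage size; variable names mirror the traced script but the sketch is not the script itself:

size_kb=1048576                                                         # requested allocation per node (1 GiB)
default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this rig
nr_hugepages=$(( size_kb / default_hugepage_kb ))                       # -> 512
declare -a nodes_test
for node in 0 1; do                                                     # HUGENODE=0,1
  nodes_test[node]=$nr_hugepages                                        # 512 pages requested on each node
done
echo "NRHUGE=$nr_hugepages HUGENODE=0,1"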
07:56:07 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:37.208 07:56:07 -- setup/hugepages.sh@146 -- # setup output 00:03:37.208 07:56:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.208 07:56:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.616 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:40.616 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:40.616 07:56:11 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:40.616 07:56:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:40.616 07:56:11 -- setup/hugepages.sh@89 -- # local node 00:03:40.616 07:56:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.616 07:56:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.616 07:56:11 -- setup/hugepages.sh@92 -- # local surp 00:03:40.616 07:56:11 -- setup/hugepages.sh@93 -- # local resv 00:03:40.616 07:56:11 -- setup/hugepages.sh@94 -- # local anon 00:03:40.616 07:56:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.616 07:56:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.616 07:56:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.616 07:56:11 -- setup/common.sh@18 -- # local node= 00:03:40.616 07:56:11 -- setup/common.sh@19 -- # local var val 00:03:40.616 07:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.616 07:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.616 07:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.616 07:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.616 07:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.616 07:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110843060 kB' 'MemAvailable: 114037332 kB' 'Buffers: 4132 kB' 'Cached: 9186712 kB' 'SwapCached: 0 kB' 'Active: 6261692 kB' 'Inactive: 3507332 kB' 'Active(anon): 5873436 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580556 kB' 'Mapped: 241388 
kB' 'Shmem: 5295256 kB' 'KReclaimable: 246028 kB' 'Slab: 873976 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627948 kB' 'KernelStack: 27232 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7472856 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234532 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.616 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.616 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.617 07:56:11 -- setup/common.sh@33 -- # echo 0 00:03:40.617 07:56:11 -- setup/common.sh@33 -- # return 0 00:03:40.617 07:56:11 -- setup/hugepages.sh@97 -- # anon=0 00:03:40.617 07:56:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.617 07:56:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.617 07:56:11 -- setup/common.sh@18 -- # local node= 00:03:40.617 07:56:11 -- setup/common.sh@19 -- # local var val 00:03:40.617 07:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.617 07:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.617 07:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.617 07:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.617 07:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.617 07:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110843692 kB' 'MemAvailable: 114037964 kB' 'Buffers: 4132 kB' 'Cached: 9186712 kB' 'SwapCached: 0 kB' 'Active: 6261268 kB' 'Inactive: 3507332 kB' 'Active(anon): 5873012 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580616 kB' 'Mapped: 241368 kB' 'Shmem: 5295256 kB' 'KReclaimable: 246028 kB' 'Slab: 873976 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627948 kB' 'KernelStack: 27216 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7472868 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234500 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.617 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.617 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 
07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.618 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.618 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.619 07:56:11 -- setup/common.sh@33 -- # echo 0 00:03:40.619 07:56:11 -- setup/common.sh@33 -- # return 0 00:03:40.619 07:56:11 -- setup/hugepages.sh@99 -- # surp=0 00:03:40.619 07:56:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.619 07:56:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.619 07:56:11 -- setup/common.sh@18 -- # local node= 00:03:40.619 07:56:11 -- setup/common.sh@19 -- # local var val 00:03:40.619 07:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.619 07:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.619 07:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.619 07:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.619 07:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.619 07:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110843888 kB' 'MemAvailable: 114038160 kB' 'Buffers: 4132 kB' 'Cached: 9186724 kB' 'SwapCached: 0 kB' 'Active: 6260660 kB' 'Inactive: 3507332 kB' 'Active(anon): 5872404 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580468 kB' 'Mapped: 241288 kB' 'Shmem: 5295268 kB' 'KReclaimable: 246028 kB' 'Slab: 873948 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627920 kB' 'KernelStack: 27216 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7472880 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234500 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.619 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.619 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 
00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.620 07:56:11 -- setup/common.sh@33 -- # echo 0 00:03:40.620 07:56:11 -- setup/common.sh@33 -- # return 0 00:03:40.620 07:56:11 -- setup/hugepages.sh@100 -- # resv=0 00:03:40.620 07:56:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.620 nr_hugepages=1024 00:03:40.620 07:56:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.620 resv_hugepages=0 00:03:40.620 07:56:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.620 surplus_hugepages=0 00:03:40.620 07:56:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.620 anon_hugepages=0 00:03:40.620 07:56:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.620 07:56:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
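What the loop above amounts to: scan the meminfo fields for one key, return its value, and then check that the requested pool (1024 pages here) is fully explained by allocated, surplus and reserved pages. A hedged sketch with illustrative names, not the exact setup/common.sh and setup/hugepages.sh helpers:

    #!/usr/bin/env bash
    # Print the numeric value of a single /proc/meminfo field (Linux only).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field until the wanted one
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    nr_hugepages=$(get_meminfo HugePages_Total)
    resv=$(get_meminfo HugePages_Rsvd)
    surp=$(get_meminfo HugePages_Surp)

    # Same accounting as the (( 1024 == nr_hugepages + surp + resv )) check above.
    (( 1024 == nr_hugepages + surp + resv )) && echo "requested hugepage pool is fully accounted for"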
00:03:40.620 07:56:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.620 07:56:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.620 07:56:11 -- setup/common.sh@18 -- # local node= 00:03:40.620 07:56:11 -- setup/common.sh@19 -- # local var val 00:03:40.620 07:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.620 07:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.620 07:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.620 07:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.620 07:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.620 07:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110845844 kB' 'MemAvailable: 114040116 kB' 'Buffers: 4132 kB' 'Cached: 9186740 kB' 'SwapCached: 0 kB' 'Active: 6260780 kB' 'Inactive: 3507332 kB' 'Active(anon): 5872524 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580576 kB' 'Mapped: 241288 kB' 'Shmem: 5295284 kB' 'KReclaimable: 246028 kB' 'Slab: 873948 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627920 kB' 'KernelStack: 27232 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7472896 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234516 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.620 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.620 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.883 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.883 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 
-- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 
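The single long printf entry a little further up is the snapshot this field-by-field scan walks through: the whole meminfo file is read into an array once, any "Node <id>" prefix is stripped so per-node files parse the same way, and one field is matched per pass. A small self-contained sketch of that pattern (illustrative names, requires bash with extglob):

    #!/usr/bin/env bash
    shopt -s extglob                         # needed for the +([0-9]) prefix pattern
    mem_f=/proc/meminfo
    mapfile -t mem < "$mem_f"                # snapshot every line into an array
    # Per-node meminfo lines start with "Node <id> "; stripping the prefix lets the
    # same scan handle /proc/meminfo and /sys/devices/system/node/nodeN/meminfo alike.
    mem=("${mem[@]#Node +([0-9]) }")

    get=HugePages_Total
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$get=$val"; break; }
    done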
00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- 
setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.884 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.884 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.884 07:56:11 -- setup/common.sh@33 -- # echo 1024 00:03:40.884 07:56:11 -- setup/common.sh@33 -- # return 0 00:03:40.884 07:56:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.884 07:56:11 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.884 07:56:11 -- setup/hugepages.sh@27 -- # local node 00:03:40.884 07:56:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.884 07:56:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.885 07:56:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.885 07:56:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.885 07:56:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.885 07:56:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.885 07:56:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.885 07:56:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.885 07:56:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.885 07:56:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.885 07:56:11 -- setup/common.sh@18 -- # local node=0 00:03:40.885 07:56:11 -- setup/common.sh@19 -- # local var val 00:03:40.885 07:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.885 07:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.885 07:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.885 07:56:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.885 07:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.885 07:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:40.885 07:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55018152 kB' 'MemUsed: 10640856 kB' 'SwapCached: 0 kB' 'Active: 4353032 kB' 'Inactive: 3272260 kB' 'Active(anon): 4174256 kB' 'Inactive(anon): 0 kB' 'Active(file): 178776 kB' 'Inactive(file): 3272260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7277432 kB' 'Mapped: 157284 kB' 'AnonPages: 351060 kB' 'Shmem: 3826396 kB' 'KernelStack: 14184 kB' 'PageTables: 4948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 416052 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 317908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- 
# continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.885 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.885 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 
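Earlier in this pass the helper switched its input from /proc/meminfo to /sys/devices/system/node/node0/meminfo because it was called with node=0; that is all the per-node variant changes. A sketch of that source selection, with an assumed helper name:

    #!/usr/bin/env bash
    # Pick the meminfo source for an optional NUMA node id (name is illustrative).
    node_meminfo_file() {
        local node=${1-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    node_meminfo_file       # system-wide view
    node_meminfo_file 0     # node-local view, if NUMA node 0 exists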
00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@33 -- # echo 0 00:03:40.886 07:56:11 -- setup/common.sh@33 -- # return 0 00:03:40.886 07:56:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.886 07:56:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.886 07:56:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.886 07:56:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.886 07:56:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.886 07:56:11 -- setup/common.sh@18 -- # local node=1 00:03:40.886 07:56:11 -- setup/common.sh@19 -- # local var val 00:03:40.886 07:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.886 07:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.886 07:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.886 07:56:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.886 07:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.886 07:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679824 kB' 'MemFree: 55827860 kB' 'MemUsed: 4851964 kB' 'SwapCached: 0 kB' 'Active: 1907508 kB' 'Inactive: 235072 kB' 'Active(anon): 1698028 kB' 'Inactive(anon): 0 kB' 'Active(file): 209480 kB' 'Inactive(file): 235072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1913456 kB' 'Mapped: 84004 kB' 'AnonPages: 229220 kB' 'Shmem: 1468904 kB' 'KernelStack: 13000 kB' 'PageTables: 3480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147884 kB' 'Slab: 457896 kB' 'SReclaimable: 147884 kB' 'SUnreclaim: 310012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 
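The node1 snapshot just printed is the second half of the balance check this test is building: 1024 pages spread over two NUMA nodes should leave 512 on each, which is what the "node0=512 expecting 512" and "node1=512 expecting 512" lines further down confirm. A compact sketch of that per-node comparison (illustrative only, not the setup/hugepages.sh code):

    #!/usr/bin/env bash
    # Expect an even split of the hugepage pool across the NUMA nodes (Linux only).
    expected_per_node=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # The node meminfo carries this node's slice of the global pool.
        actual=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "node$node=$actual expecting $expected_per_node"
    done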
00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.886 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.886 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # continue 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.887 07:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.887 07:56:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.887 07:56:11 -- setup/common.sh@33 -- # echo 0 00:03:40.887 07:56:11 -- setup/common.sh@33 -- # return 0 00:03:40.887 07:56:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.887 07:56:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.887 07:56:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.887 07:56:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.887 node0=512 expecting 512 00:03:40.887 07:56:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.887 07:56:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.887 07:56:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.887 07:56:11 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:40.887 node1=512 expecting 512 00:03:40.887 07:56:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:40.887 00:03:40.887 real 0m3.669s 00:03:40.887 user 0m1.456s 00:03:40.887 sys 0m2.270s 00:03:40.887 07:56:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.887 07:56:11 -- common/autotest_common.sh@10 -- # set +x 00:03:40.887 ************************************ 00:03:40.887 END TEST per_node_1G_alloc 00:03:40.887 ************************************ 00:03:40.887 07:56:11 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:40.887 07:56:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.887 07:56:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.887 07:56:11 -- common/autotest_common.sh@10 -- # set +x 00:03:40.887 ************************************ 00:03:40.887 START TEST even_2G_alloc 00:03:40.887 ************************************ 00:03:40.887 07:56:11 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:40.887 07:56:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:40.887 07:56:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.887 07:56:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.887 07:56:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:40.887 07:56:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:40.887 07:56:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.887 07:56:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.887 07:56:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.887 07:56:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.887 07:56:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.887 07:56:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.887 07:56:11 -- setup/hugepages.sh@83 -- # : 512 00:03:40.887 07:56:11 -- setup/hugepages.sh@84 -- # : 1 00:03:40.887 07:56:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.887 07:56:11 -- setup/hugepages.sh@83 -- # : 0 00:03:40.887 07:56:11 -- setup/hugepages.sh@84 -- # : 0 00:03:40.887 07:56:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.887 07:56:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:40.887 07:56:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:40.887 07:56:11 -- setup/hugepages.sh@153 -- # setup output 00:03:40.887 07:56:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.887 07:56:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.187 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:44.187 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:44.187 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:44.187 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:44.452 07:56:14 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:44.452 07:56:14 -- setup/hugepages.sh@89 -- # local node 00:03:44.452 07:56:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.452 07:56:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.452 07:56:14 -- setup/hugepages.sh@92 -- # local surp 00:03:44.452 07:56:14 -- setup/hugepages.sh@93 -- # local resv 00:03:44.452 07:56:14 -- setup/hugepages.sh@94 -- # local anon 00:03:44.452 07:56:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.452 07:56:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.452 07:56:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.452 07:56:14 -- setup/common.sh@18 -- # local node= 00:03:44.452 07:56:14 -- setup/common.sh@19 -- # local var val 00:03:44.452 07:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.452 07:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.452 07:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.452 07:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.452 07:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.452 07:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110851212 kB' 'MemAvailable: 114045484 kB' 'Buffers: 4132 kB' 'Cached: 9186856 kB' 'SwapCached: 0 kB' 'Active: 6262204 kB' 'Inactive: 3507332 kB' 'Active(anon): 5873948 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581408 kB' 'Mapped: 241388 kB' 'Shmem: 5295400 kB' 'KReclaimable: 246028 kB' 'Slab: 873996 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627968 kB' 'KernelStack: 27216 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7473284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234468 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.452 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.452 07:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 
07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.453 07:56:14 -- 
setup/common.sh@33 -- # echo 0 00:03:44.453 07:56:14 -- setup/common.sh@33 -- # return 0 00:03:44.453 07:56:14 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.453 07:56:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.453 07:56:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.453 07:56:14 -- setup/common.sh@18 -- # local node= 00:03:44.453 07:56:14 -- setup/common.sh@19 -- # local var val 00:03:44.453 07:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.453 07:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.453 07:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.453 07:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.453 07:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.453 07:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110850968 kB' 'MemAvailable: 114045240 kB' 'Buffers: 4132 kB' 'Cached: 9186864 kB' 'SwapCached: 0 kB' 'Active: 6261848 kB' 'Inactive: 3507332 kB' 'Active(anon): 5873592 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581000 kB' 'Mapped: 241380 kB' 'Shmem: 5295408 kB' 'KReclaimable: 246028 kB' 'Slab: 873980 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627952 kB' 'KernelStack: 27168 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7473432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234420 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 
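The xtrace above records setup/common.sh walking /proc/meminfo one "Key: value" field at a time until it reaches the key it was asked for, first AnonHugePages (0) and, from here on, HugePages_Surp, HugePages_Rsvd and HugePages_Total. The helper below is a minimal sketch of that scan, assuming the structure the trace shows (mapfile, the "Node N " prefix strip, the IFS=': ' read loop); it is a reconstruction, not the actual body of setup/common.sh, and the calls at the bottom only illustrate usage.

#!/usr/bin/env bash
# Hedged reconstruction of the meminfo scan traced in this log. Names
# (get, node, mem_f, mem) mirror the trace; the real source is not copied.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # A per-node query (second argument) reads that node's own meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every field with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Walk the "Key: value" fields until the requested key matches, then
    # print its numeric value (0 for AnonHugePages, 1024 for HugePages_Total
    # on this runner).
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo AnonHugePages        # system-wide lookup, prints 0 here
get_meminfo HugePages_Total      # prints 1024 on this runner
get_meminfo HugePages_Surp 0     # node-0 lookup via node0/meminfo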
07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 
07:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.453 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.453 07:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': 
' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.454 07:56:14 -- setup/common.sh@33 -- # echo 0 00:03:44.454 07:56:14 -- setup/common.sh@33 -- # return 0 00:03:44.454 07:56:14 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.454 07:56:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.454 07:56:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.454 07:56:14 -- setup/common.sh@18 -- # local node= 00:03:44.454 07:56:14 -- setup/common.sh@19 -- # local var val 00:03:44.454 07:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.454 07:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.454 07:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.454 07:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.454 07:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.454 07:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.454 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.454 07:56:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.454 07:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110850548 kB' 'MemAvailable: 114044820 kB' 'Buffers: 4132 kB' 'Cached: 9186884 kB' 'SwapCached: 0 kB' 'Active: 6261172 kB' 'Inactive: 3507332 kB' 'Active(anon): 5872916 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580808 kB' 'Mapped: 241300 kB' 'Shmem: 5295428 kB' 'KReclaimable: 246028 kB' 'Slab: 873980 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627952 kB' 'KernelStack: 27168 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7473456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234420 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:44.454 07:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- 
setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.455 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.455 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.456 07:56:14 -- setup/common.sh@33 -- # echo 0 00:03:44.456 07:56:14 -- setup/common.sh@33 -- # return 0 00:03:44.456 07:56:14 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.456 07:56:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.456 nr_hugepages=1024 00:03:44.456 07:56:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.456 resv_hugepages=0 00:03:44.456 07:56:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.456 surplus_hugepages=0 00:03:44.456 07:56:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.456 anon_hugepages=0 00:03:44.456 07:56:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.456 07:56:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.456 07:56:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.456 07:56:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.456 07:56:14 -- setup/common.sh@18 -- # local node= 00:03:44.456 07:56:14 -- setup/common.sh@19 -- # local var val 00:03:44.456 07:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.456 07:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.456 07:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.456 07:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.456 07:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.456 07:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110850364 kB' 'MemAvailable: 114044636 kB' 'Buffers: 4132 kB' 'Cached: 9186908 kB' 'SwapCached: 0 kB' 'Active: 6261396 kB' 'Inactive: 3507332 kB' 'Active(anon): 5873140 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581000 kB' 'Mapped: 241300 kB' 'Shmem: 5295452 kB' 'KReclaimable: 246028 kB' 'Slab: 873980 kB' 
'SReclaimable: 246028 kB' 'SUnreclaim: 627952 kB' 'KernelStack: 27200 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7473964 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234436 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.456 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.456 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # 
[[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # continue 00:03:44.457 07:56:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.457 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.457 07:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.457 07:56:14 -- setup/common.sh@33 -- # echo 1024 00:03:44.457 07:56:14 -- setup/common.sh@33 -- # return 0 00:03:44.457 07:56:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.457 07:56:14 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.457 07:56:14 -- setup/hugepages.sh@27 -- # local node 00:03:44.457 07:56:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.457 07:56:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.457 07:56:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.457 07:56:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.457 07:56:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.457 07:56:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.457 07:56:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.457 07:56:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.457 07:56:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.457 07:56:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.457 07:56:14 -- setup/common.sh@18 -- # local node=0 00:03:44.457 07:56:14 -- setup/common.sh@19 -- # local var val 00:03:44.457 07:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.457 07:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.457 07:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.458 07:56:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.458 07:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.458 07:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.458 07:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55033152 kB' 'MemUsed: 10625856 kB' 'SwapCached: 0 kB' 'Active: 4354164 kB' 'Inactive: 3272260 kB' 'Active(anon): 4175388 kB' 'Inactive(anon): 0 kB' 'Active(file): 178776 kB' 'Inactive(file): 3272260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7277600 kB' 'Mapped: 157300 kB' 'AnonPages: 352096 kB' 'Shmem: 3826564 kB' 'KernelStack: 14216 kB' 'PageTables: 5036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 415792 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 317648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- 
setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.458 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.458 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@33 -- # echo 0 00:03:44.459 07:56:15 -- setup/common.sh@33 -- # return 0 00:03:44.459 07:56:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.459 07:56:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.459 07:56:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.459 07:56:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:44.459 07:56:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.459 07:56:15 -- setup/common.sh@18 -- # local node=1 00:03:44.459 07:56:15 -- setup/common.sh@19 -- # local var val 00:03:44.459 07:56:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.459 07:56:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.459 07:56:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:44.459 07:56:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:44.459 07:56:15 -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:44.459 07:56:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679824 kB' 'MemFree: 55816456 kB' 'MemUsed: 4863368 kB' 'SwapCached: 0 kB' 'Active: 1907252 kB' 'Inactive: 235072 kB' 'Active(anon): 1697772 kB' 'Inactive(anon): 0 kB' 'Active(file): 209480 kB' 'Inactive(file): 235072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1913456 kB' 'Mapped: 84000 kB' 'AnonPages: 228904 kB' 'Shmem: 1468904 kB' 'KernelStack: 12984 kB' 'PageTables: 3432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147884 kB' 'Slab: 458188 kB' 'SReclaimable: 147884 kB' 'SUnreclaim: 310304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 
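The entries above and below are the body of get_meminfo from setup/common.sh resolving HugePages_Surp for NUMA node 1: the per-node meminfo file is read into an array, the leading "Node N " prefix is stripped, and each "key: value" pair is skipped until the requested field matches, at which point its value (0 here) is echoed back to hugepages.sh. A condensed, standalone sketch of that flow follows; variable names mirror the trace, but the exact trim/echo expression is an approximation rather than a verbatim copy of setup/common.sh.

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the "Node N " strip below
    get_meminfo() {                        # get_meminfo <field> [<numa-node>]
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local var val _
        # Prefer the per-node meminfo when a node is given and exposed by sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 1 " prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Surp 1           # prints 0 on this box, per the trace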
00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- 
setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.459 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.459 07:56:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.460 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.460 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.460 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 
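The scan above is the tail end of that node-1 HugePages_Surp lookup; in the entries that follow, hugepages.sh folds the surplus back into its per-node tally and prints the even_2G_alloc verdict ('node0=512 expecting 512', 'node1=512 expecting 512') before the test is closed out. Stripped of the sorted_t/sorted_s bookkeeping, the check reduces to roughly the sketch below; the function name check_even_alloc is invented for illustration, while the counts (512 pages per node, zero surplus) come straight from the trace.

    #!/usr/bin/env bash
    # Condensed form of the per-node verification printed just below in the log.
    check_even_alloc() {
        local -a nodes_test=(512 512)   # pages the test assigned to node0/node1
        local -a surp=(0 0)             # HugePages_Surp per node, as just read
        local expected=512 node rc=0
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += surp[node] ))
            echo "node$node=${nodes_test[node]} expecting $expected"
            (( nodes_test[node] == expected )) || rc=1
        done
        return $rc
    }
    check_even_alloc                    # exits 0, so even_2G_alloc passes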
00:03:44.460 07:56:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.460 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.460 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.460 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.460 07:56:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.460 07:56:15 -- setup/common.sh@32 -- # continue 00:03:44.460 07:56:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.460 07:56:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.460 07:56:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.460 07:56:15 -- setup/common.sh@33 -- # echo 0 00:03:44.460 07:56:15 -- setup/common.sh@33 -- # return 0 00:03:44.460 07:56:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.460 07:56:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.460 07:56:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.460 07:56:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.460 node0=512 expecting 512 00:03:44.460 07:56:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.460 07:56:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.460 07:56:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.460 07:56:15 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:44.460 node1=512 expecting 512 00:03:44.460 07:56:15 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:44.460 00:03:44.460 real 0m3.667s 00:03:44.460 user 0m1.498s 00:03:44.460 sys 0m2.223s 00:03:44.460 07:56:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.460 07:56:15 -- common/autotest_common.sh@10 -- # set +x 00:03:44.460 ************************************ 00:03:44.460 END TEST even_2G_alloc 00:03:44.460 ************************************ 00:03:44.460 07:56:15 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:44.460 07:56:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.460 07:56:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.460 07:56:15 -- common/autotest_common.sh@10 -- # set +x 00:03:44.460 ************************************ 00:03:44.460 START TEST odd_alloc 00:03:44.460 ************************************ 00:03:44.460 07:56:15 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:44.460 07:56:15 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:44.460 07:56:15 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:44.460 07:56:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:44.460 07:56:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.460 07:56:15 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.460 07:56:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.460 07:56:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:44.460 07:56:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.460 07:56:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.460 07:56:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.460 07:56:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@74 -- # (( 0 > 
0 )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:44.460 07:56:15 -- setup/hugepages.sh@83 -- # : 513 00:03:44.460 07:56:15 -- setup/hugepages.sh@84 -- # : 1 00:03:44.460 07:56:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:44.460 07:56:15 -- setup/hugepages.sh@83 -- # : 0 00:03:44.460 07:56:15 -- setup/hugepages.sh@84 -- # : 0 00:03:44.460 07:56:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.460 07:56:15 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:44.460 07:56:15 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:44.460 07:56:15 -- setup/hugepages.sh@160 -- # setup output 00:03:44.460 07:56:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.460 07:56:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.676 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:48.676 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.676 07:56:18 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:48.676 07:56:18 -- setup/hugepages.sh@89 -- # local node 00:03:48.676 07:56:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.676 07:56:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.676 07:56:18 -- setup/hugepages.sh@92 -- # local surp 00:03:48.676 07:56:18 -- setup/hugepages.sh@93 -- # local resv 00:03:48.676 07:56:18 -- setup/hugepages.sh@94 -- # local anon 00:03:48.676 07:56:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.676 07:56:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.676 07:56:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.676 07:56:18 -- setup/common.sh@18 -- # local node= 00:03:48.676 07:56:18 -- setup/common.sh@19 -- # local var val 00:03:48.676 07:56:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.676 07:56:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.676 07:56:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.676 07:56:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.676 07:56:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.676 07:56:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.676 07:56:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110836756 kB' 'MemAvailable: 114031028 kB' 'Buffers: 4132 kB' 'Cached: 9187012 kB' 'SwapCached: 0 kB' 'Active: 6264044 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875788 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583176 kB' 'Mapped: 241324 kB' 'Shmem: 5295556 kB' 'KReclaimable: 246028 kB' 'Slab: 874000 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627972 kB' 'KernelStack: 27232 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7478244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234452 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 
07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.676 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.676 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.677 07:56:18 -- setup/common.sh@33 -- # echo 0 00:03:48.677 07:56:18 -- setup/common.sh@33 -- # return 0 00:03:48.677 07:56:18 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.677 07:56:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.677 07:56:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.677 07:56:18 -- setup/common.sh@18 -- # local node= 00:03:48.677 07:56:18 -- setup/common.sh@19 -- # local var val 00:03:48.677 07:56:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.677 07:56:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.677 07:56:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.677 07:56:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.677 07:56:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.677 07:56:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110838792 kB' 'MemAvailable: 114033064 kB' 'Buffers: 4132 kB' 'Cached: 9187016 kB' 'SwapCached: 0 kB' 'Active: 6263040 kB' 'Inactive: 3507332 kB' 'Active(anon): 5874784 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582556 kB' 
'Mapped: 241240 kB' 'Shmem: 5295560 kB' 'KReclaimable: 246028 kB' 'Slab: 874032 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628004 kB' 'KernelStack: 27280 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7478016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234372 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 
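The /proc/meminfo dump above is the system-wide view that odd_alloc's verify_nr_hugepages works from: the test asked for 2098176 kB of hugepage memory (HUGEMEM=2049, i.e. 2049 MB), which at the default 2048 kB page size lands on an odd total of 1025 pages, split 513/512 across the two NUMA nodes earlier in the trace, and the kernel now reports HugePages_Total: 1025 with zero surplus and zero reserved. The arithmetic, reconstructed from the trace (the exact rounding expression in hugepages.sh may differ), is roughly:

    #!/usr/bin/env bash
    # Sizing seen in the trace: HUGEMEM=2049 (MB) with 2048 kB hugepages.
    hugemem_mb=2049
    size_kb=$(( hugemem_mb * 1024 ))                  # 2098176 kB requested
    page_kb=2048
    nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))
    echo "nr_hugepages=$nr_hugepages"                 # 1025, as in the trace

    # Odd counts are split so the extra page lands on node0 (hugepages.sh@81-84).
    nodes=2
    per_node=$(( nr_hugepages / nodes ))              # 512
    extra=$(( nr_hugepages % nodes ))                 # 1
    echo "node0=$(( per_node + extra )) node1=$per_node"   # node0=513 node1=512

    # verify_nr_hugepages then cross-checks the kernel's totals:
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo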
00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.677 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.677 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- 
setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 
07:56:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 
07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.678 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.678 07:56:18 -- setup/common.sh@33 -- # echo 0 00:03:48.678 07:56:18 -- setup/common.sh@33 -- # return 0 00:03:48.678 07:56:18 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.678 07:56:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.678 07:56:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.678 07:56:18 -- setup/common.sh@18 -- # local node= 00:03:48.678 07:56:18 -- setup/common.sh@19 -- # local var val 00:03:48.678 07:56:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.678 07:56:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.678 07:56:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.678 07:56:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.678 07:56:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.678 07:56:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.678 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110838596 kB' 'MemAvailable: 114032868 kB' 'Buffers: 4132 kB' 'Cached: 9187028 kB' 'SwapCached: 0 kB' 'Active: 6263312 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875056 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582888 kB' 'Mapped: 241240 kB' 'Shmem: 5295572 kB' 'KReclaimable: 246028 kB' 'Slab: 874032 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628004 kB' 'KernelStack: 27440 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7479676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234468 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- 
setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.679 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.679 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- 
setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.680 07:56:18 -- setup/common.sh@33 -- # echo 0 00:03:48.680 07:56:18 -- setup/common.sh@33 -- # return 0 00:03:48.680 07:56:18 -- setup/hugepages.sh@100 -- # resv=0 
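The long run of "-- # continue" entries above is bash xtrace from the get_meminfo helper in setup/common.sh: the contents of /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) are loaded into an array with mapfile, then walked with an IFS=': ' read loop that skips every field until the requested key matches (here HugePages_Rsvd) and echoes its value, which the caller stores as resv=0. A minimal sketch of that parsing idiom follows; the function name and argument handling are illustrative, not the exact setup/common.sh code (the real helper also strips the "Node <N> " prefix found in per-node meminfo files):

  #!/usr/bin/env bash
  # meminfo_value KEY [FILE]: print the value of one meminfo field.
  meminfo_value() {
      local get=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          # Skip every line until the requested field is reached.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < "$file"
      return 1
  }
  # e.g. resv=$(meminfo_value HugePages_Rsvd)   # -> 0 in the run above

The trace below repeats the same scan for HugePages_Total (returning 1025) and then for the per-node HugePages_Surp values on node 0 and node 1.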
00:03:48.680 07:56:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.680 nr_hugepages=1025 00:03:48.680 07:56:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.680 resv_hugepages=0 00:03:48.680 07:56:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.680 surplus_hugepages=0 00:03:48.680 07:56:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.680 anon_hugepages=0 00:03:48.680 07:56:18 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.680 07:56:18 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.680 07:56:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.680 07:56:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.680 07:56:18 -- setup/common.sh@18 -- # local node= 00:03:48.680 07:56:18 -- setup/common.sh@19 -- # local var val 00:03:48.680 07:56:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.680 07:56:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.680 07:56:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.680 07:56:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.680 07:56:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.680 07:56:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110839380 kB' 'MemAvailable: 114033652 kB' 'Buffers: 4132 kB' 'Cached: 9187044 kB' 'SwapCached: 0 kB' 'Active: 6264196 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875940 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583796 kB' 'Mapped: 241240 kB' 'Shmem: 5295588 kB' 'KReclaimable: 246028 kB' 'Slab: 874032 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628004 kB' 'KernelStack: 27440 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508420 kB' 'Committed_AS: 7478048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234468 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.680 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.680 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.681 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.681 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.681 07:56:18 -- setup/common.sh@33 -- # echo 1025 00:03:48.681 07:56:18 -- setup/common.sh@33 -- # return 0 00:03:48.682 07:56:18 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.682 07:56:18 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.682 07:56:18 -- setup/hugepages.sh@27 -- # local node 00:03:48.682 07:56:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.682 07:56:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.682 07:56:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.682 07:56:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:48.682 07:56:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.682 07:56:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.682 07:56:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.682 07:56:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.682 07:56:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.682 07:56:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.682 07:56:18 -- setup/common.sh@18 -- # local node=0 00:03:48.682 07:56:18 -- setup/common.sh@19 -- # 
local var val 00:03:48.682 07:56:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.682 07:56:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.682 07:56:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.682 07:56:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.682 07:56:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.682 07:56:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55037368 kB' 'MemUsed: 10621640 kB' 'SwapCached: 0 kB' 'Active: 4355992 kB' 'Inactive: 3272260 kB' 'Active(anon): 4177216 kB' 'Inactive(anon): 0 kB' 'Active(file): 178776 kB' 'Inactive(file): 3272260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7277732 kB' 'Mapped: 157240 kB' 'AnonPages: 353836 kB' 'Shmem: 3826696 kB' 'KernelStack: 14280 kB' 'PageTables: 5292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 415848 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 317704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 
07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.682 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.682 07:56:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@33 -- # echo 0 00:03:48.683 07:56:18 -- setup/common.sh@33 -- # return 0 00:03:48.683 07:56:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.683 07:56:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.683 07:56:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.683 07:56:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.683 07:56:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.683 07:56:18 -- setup/common.sh@18 -- # local node=1 00:03:48.683 07:56:18 -- setup/common.sh@19 -- # local var val 00:03:48.683 07:56:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.683 07:56:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.683 07:56:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.683 07:56:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.683 07:56:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.683 07:56:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679824 kB' 'MemFree: 55807280 kB' 'MemUsed: 4872544 kB' 'SwapCached: 0 kB' 'Active: 1908380 kB' 'Inactive: 235072 kB' 'Active(anon): 1698900 kB' 'Inactive(anon): 0 kB' 'Active(file): 209480 kB' 'Inactive(file): 235072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1913456 kB' 'Mapped: 84000 kB' 'AnonPages: 230116 kB' 'Shmem: 1468904 kB' 'KernelStack: 13080 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147884 kB' 'Slab: 458184 kB' 'SReclaimable: 147884 kB' 'SUnreclaim: 310300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 
07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 
07:56:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.683 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.683 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # continue 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.684 07:56:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.684 07:56:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.684 07:56:18 -- setup/common.sh@33 -- # echo 0 00:03:48.684 07:56:18 -- setup/common.sh@33 -- # return 0 00:03:48.684 07:56:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.684 07:56:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.684 07:56:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.684 07:56:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:48.684 node0=512 expecting 513 00:03:48.684 07:56:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.684 07:56:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.684 07:56:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 
00:03:48.684 07:56:18 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:48.684 node1=513 expecting 512 00:03:48.684 07:56:18 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:48.684 00:03:48.684 real 0m3.692s 00:03:48.684 user 0m1.460s 00:03:48.684 sys 0m2.284s 00:03:48.684 07:56:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.684 07:56:18 -- common/autotest_common.sh@10 -- # set +x 00:03:48.684 ************************************ 00:03:48.684 END TEST odd_alloc 00:03:48.684 ************************************ 00:03:48.684 07:56:18 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.684 07:56:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.684 07:56:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.684 07:56:18 -- common/autotest_common.sh@10 -- # set +x 00:03:48.684 ************************************ 00:03:48.684 START TEST custom_alloc 00:03:48.684 ************************************ 00:03:48.684 07:56:18 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:48.684 07:56:18 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.684 07:56:18 -- setup/hugepages.sh@169 -- # local node 00:03:48.684 07:56:18 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.684 07:56:18 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.684 07:56:18 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.684 07:56:18 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:48.684 07:56:18 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.684 07:56:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.684 07:56:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.684 07:56:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.684 07:56:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.684 07:56:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.684 07:56:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.684 07:56:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.684 07:56:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.684 07:56:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.684 07:56:18 -- setup/hugepages.sh@83 -- # : 256 00:03:48.684 07:56:18 -- setup/hugepages.sh@84 -- # : 1 00:03:48.684 07:56:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.684 07:56:18 -- setup/hugepages.sh@83 -- # : 0 00:03:48.684 07:56:18 -- setup/hugepages.sh@84 -- # : 0 00:03:48.684 07:56:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.684 07:56:18 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:48.684 07:56:18 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.684 07:56:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:03:48.684 07:56:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.684 07:56:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.684 07:56:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.684 07:56:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.684 07:56:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.684 07:56:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.684 07:56:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.684 07:56:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.684 07:56:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.684 07:56:18 -- setup/hugepages.sh@78 -- # return 0 00:03:48.684 07:56:18 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:48.684 07:56:18 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.684 07:56:18 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.684 07:56:18 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.684 07:56:18 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.684 07:56:18 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.684 07:56:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.684 07:56:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.684 07:56:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.684 07:56:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.684 07:56:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.684 07:56:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.684 07:56:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:48.684 07:56:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.684 07:56:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.684 07:56:18 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.684 07:56:18 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:48.684 07:56:18 -- setup/hugepages.sh@78 -- # return 0 00:03:48.684 07:56:18 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:48.684 07:56:18 -- setup/hugepages.sh@187 -- # setup output 00:03:48.684 07:56:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.684 07:56:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.993 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:65:00.0 
(144d a80a): Already using the vfio-pci driver 00:03:51.993 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.993 07:56:22 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:51.993 07:56:22 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:51.993 07:56:22 -- setup/hugepages.sh@89 -- # local node 00:03:51.993 07:56:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.993 07:56:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.993 07:56:22 -- setup/hugepages.sh@92 -- # local surp 00:03:51.993 07:56:22 -- setup/hugepages.sh@93 -- # local resv 00:03:51.993 07:56:22 -- setup/hugepages.sh@94 -- # local anon 00:03:51.993 07:56:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.993 07:56:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.993 07:56:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.993 07:56:22 -- setup/common.sh@18 -- # local node= 00:03:51.993 07:56:22 -- setup/common.sh@19 -- # local var val 00:03:51.993 07:56:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.993 07:56:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.993 07:56:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.993 07:56:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.993 07:56:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.993 07:56:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.993 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.993 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 109806284 kB' 'MemAvailable: 113000556 kB' 'Buffers: 4132 kB' 'Cached: 9187164 kB' 'SwapCached: 0 kB' 'Active: 6264040 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875784 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582868 kB' 'Mapped: 241760 kB' 'Shmem: 5295708 kB' 'KReclaimable: 246028 kB' 'Slab: 873856 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627828 kB' 'KernelStack: 27312 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7478812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234580 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 
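The long run of "[[ FieldName == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" entries here is xtrace output from the get_meminfo helper in setup/common.sh walking the captured /proc/meminfo fields one at a time until it reaches the requested key (AnonHugePages for this call); the backslashes are only how set -x prints the quoted right-hand pattern of the [[ ]] test, not part of the data. A minimal stand-alone sketch of that scan follows; the helper name is illustrative and this is not the SPDK script itself:

    # Sketch of the field scan traced above: print the value of one /proc/meminfo key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # xtrace shows this test once per skipped field
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }
    # get_meminfo_sketch AnonHugePages  -> 0 on this run, hence anon=0 a little further down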
07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ SUnreclaim 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.994 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.994 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.995 07:56:22 -- setup/common.sh@33 -- # echo 0 00:03:51.995 07:56:22 -- setup/common.sh@33 -- # return 0 00:03:51.995 07:56:22 -- setup/hugepages.sh@97 -- # anon=0 00:03:51.995 07:56:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.995 07:56:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.995 07:56:22 -- setup/common.sh@18 -- # local node= 00:03:51.995 07:56:22 -- setup/common.sh@19 -- # local var val 00:03:51.995 07:56:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.995 07:56:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.995 07:56:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.995 07:56:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.995 07:56:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.995 07:56:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 109806900 kB' 'MemAvailable: 113001172 kB' 'Buffers: 4132 kB' 'Cached: 9187168 kB' 'SwapCached: 0 kB' 'Active: 6263748 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875492 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583144 kB' 'Mapped: 241332 kB' 'Shmem: 5295712 kB' 'KReclaimable: 246028 kB' 'Slab: 873852 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627824 kB' 'KernelStack: 27264 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7480468 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234628 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # 
continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.995 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.995 07:56:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 
07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.996 07:56:22 -- setup/common.sh@33 -- # echo 0 00:03:51.996 07:56:22 -- setup/common.sh@33 -- # return 0 00:03:51.996 07:56:22 -- setup/hugepages.sh@99 -- # surp=0 00:03:51.996 07:56:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:51.996 07:56:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:51.996 07:56:22 -- setup/common.sh@18 -- # local node= 00:03:51.996 07:56:22 -- setup/common.sh@19 -- # local var val 00:03:51.996 07:56:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.996 07:56:22 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:51.996 07:56:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.996 07:56:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.996 07:56:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.996 07:56:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.996 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.996 07:56:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 109805600 kB' 'MemAvailable: 112999872 kB' 'Buffers: 4132 kB' 'Cached: 9187180 kB' 'SwapCached: 0 kB' 'Active: 6263576 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875320 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582944 kB' 'Mapped: 241332 kB' 'Shmem: 5295724 kB' 'KReclaimable: 246028 kB' 'Slab: 873852 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627824 kB' 'KernelStack: 27360 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7478840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234628 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
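The mem_f=/proc/meminfo, [[ -e /sys/devices/system/node/node/meminfo ]], mapfile -t mem and mem=("${mem[@]#Node +([0-9]) }") entries at the top of this call show how get_meminfo picks its input: with no node argument the sysfs path test fails and it falls back to /proc/meminfo, while the per-node checks later in this test pass a node number and read /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that the extglob expansion strips. A small sketch of that source selection under the same assumptions; the function name is illustrative, not the SPDK helper:

    # Sketch of the node-aware meminfo source selection traced above (illustrative).
    shopt -s extglob
    meminfo_lines() {
        local node=${1:-} mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        printf '%s\n' "${mem[@]}"
    }
    # meminfo_lines   -> system-wide fields;  meminfo_lines 0 -> node0 fields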
00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 
00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.997 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.997 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:51.998 07:56:22 -- setup/common.sh@33 -- # echo 0 00:03:51.998 07:56:22 -- setup/common.sh@33 -- # return 0 00:03:51.998 07:56:22 -- setup/hugepages.sh@100 -- # resv=0 00:03:51.998 07:56:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:51.998 nr_hugepages=1536 00:03:51.998 07:56:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:51.998 resv_hugepages=0 00:03:51.998 07:56:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:51.998 surplus_hugepages=0 00:03:51.998 07:56:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:51.998 anon_hugepages=0 00:03:51.998 07:56:22 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:51.998 07:56:22 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:51.998 07:56:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:51.998 07:56:22 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:51.998 07:56:22 -- setup/common.sh@18 -- # local node= 00:03:51.998 07:56:22 -- setup/common.sh@19 -- # local var val 00:03:51.998 07:56:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.998 07:56:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.998 07:56:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.998 07:56:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.998 07:56:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.998 07:56:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 109806232 kB' 'MemAvailable: 113000504 kB' 'Buffers: 4132 kB' 'Cached: 9187192 kB' 'SwapCached: 
0 kB' 'Active: 6264208 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875952 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583604 kB' 'Mapped: 241332 kB' 'Shmem: 5295736 kB' 'KReclaimable: 246028 kB' 'Slab: 873856 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 627828 kB' 'KernelStack: 27440 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985156 kB' 'Committed_AS: 7481260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234628 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.998 07:56:22 -- setup/common.sh@32 -- # continue 
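By this point verify_nr_hugepages has collected anon=0, surp=0 and resv=0 from the scans above, echoed nr_hugepages=1536, and run the (( 1536 == nr_hugepages + surp + resv )) test; the HugePages_Total value being re-read in this dump has to equal the requested count plus surplus and reserved pages. A self-contained re-creation of that bookkeeping, using awk for brevity rather than the traced helper (the values in the comments are the ones from this run):

    # Re-creation of the hugepage accounting check traced here (values from this run).
    nr_hugepages=1536                                                    # 512 on node0 + 1024 on node1
    anon=$(awk  '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)     # 0, reported as anon_hugepages
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)     # 0
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)     # 0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)     # 1536
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'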
00:03:51.998 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.998 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # continue 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.999 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.999 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.999 07:56:22 -- setup/common.sh@33 -- # echo 1536 00:03:51.999 07:56:22 -- setup/common.sh@33 -- # return 0 00:03:51.999 07:56:22 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:51.999 07:56:22 -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.999 07:56:22 -- setup/hugepages.sh@27 -- # local node 00:03:51.999 07:56:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.000 07:56:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.000 07:56:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.000 07:56:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.000 07:56:22 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.000 07:56:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.000 07:56:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.000 07:56:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.000 07:56:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.000 07:56:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.000 07:56:22 -- setup/common.sh@18 -- # local node=0 00:03:52.000 07:56:22 -- setup/common.sh@19 -- # local var val 00:03:52.000 07:56:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.000 07:56:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.000 07:56:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.000 07:56:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.000 07:56:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.000 07:56:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 55043660 kB' 'MemUsed: 10615348 kB' 'SwapCached: 0 kB' 'Active: 4354768 kB' 'Inactive: 3272260 kB' 'Active(anon): 4175992 kB' 'Inactive(anon): 0 kB' 'Active(file): 178776 kB' 'Inactive(file): 3272260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7277808 kB' 'Mapped: 157320 kB' 'AnonPages: 352372 kB' 'Shmem: 3826772 kB' 'KernelStack: 14232 kB' 'PageTables: 5096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 415588 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 317444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 
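[Editor's note] The node0=512 / node1=1024 split that the check above verifies against the 1536-page total is read from the per-node sysfs counters. As a minimal stand-alone sketch of the same accounting (an added illustration, not the setup/hugepages.sh helpers whose trace continues below; it assumes the default 2048 kB hugepage size used throughout this run):

#!/usr/bin/env bash
# Sum the 2 MiB hugepages reserved on every NUMA node and compare the sum
# with the global HugePages_Total, mirroring the per-node check traced above.
total=0
declare -A node_pages
for node in /sys/devices/system/node/node[0-9]*; do
    count=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    node_pages[${node##*node}]=$count
    total=$((total + count))
done
global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
echo "nodes: ${!node_pages[*]} -> ${node_pages[*]} (sum=$total, global=$global)"
(( total == global )) || echo "per-node split does not match HugePages_Total"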
00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.000 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.000 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@33 -- # echo 0 00:03:52.001 07:56:22 -- setup/common.sh@33 -- # return 0 00:03:52.001 07:56:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.001 07:56:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.001 07:56:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.001 07:56:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.001 07:56:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.001 07:56:22 -- setup/common.sh@18 -- # local node=1 
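[Editor's note] What follows is the same field-by-field scan, now over /sys/devices/system/node/node1/meminfo, that was just run for node 0. Reduced to its essentials, the lookup pattern visible in the trace (mapfile the file, strip the "Node N " prefix, IFS=': ' read, echo the matching value) can be written as the hedged sketch below; it mirrors the traced behaviour of get_meminfo in setup/common.sh but is not that script:

#!/usr/bin/env bash
shopt -s extglob
# Return one field (e.g. HugePages_Surp) from /proc/meminfo or, when a node
# number is given, from that node's meminfo file.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch HugePages_Surp 1   # prints 0 in the state captured above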
00:03:52.001 07:56:22 -- setup/common.sh@19 -- # local var val 00:03:52.001 07:56:22 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.001 07:56:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.001 07:56:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.001 07:56:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.001 07:56:22 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.001 07:56:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679824 kB' 'MemFree: 54762272 kB' 'MemUsed: 5917552 kB' 'SwapCached: 0 kB' 'Active: 1909196 kB' 'Inactive: 235072 kB' 'Active(anon): 1699716 kB' 'Inactive(anon): 0 kB' 'Active(file): 209480 kB' 'Inactive(file): 235072 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1913544 kB' 'Mapped: 84004 kB' 'AnonPages: 230956 kB' 'Shmem: 1468992 kB' 'KernelStack: 13032 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 147884 kB' 'Slab: 458268 kB' 'SReclaimable: 147884 kB' 'SUnreclaim: 310384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.001 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.001 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.002 07:56:22 -- setup/common.sh@32 -- 
# continue 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # continue 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.002 07:56:22 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.002 07:56:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.002 07:56:22 -- setup/common.sh@33 -- # echo 0 00:03:52.002 07:56:22 -- setup/common.sh@33 -- # return 0 00:03:52.002 07:56:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.002 07:56:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.002 07:56:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.002 07:56:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.002 07:56:22 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.002 node0=512 expecting 512 00:03:52.002 07:56:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.002 07:56:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.002 07:56:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.002 07:56:22 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:52.002 node1=1024 expecting 1024 00:03:52.002 07:56:22 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:52.002 00:03:52.002 real 0m3.728s 00:03:52.002 user 0m1.550s 00:03:52.002 sys 0m2.242s 00:03:52.002 07:56:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.002 07:56:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.002 ************************************ 00:03:52.002 END TEST custom_alloc 00:03:52.002 ************************************ 00:03:52.002 07:56:22 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:52.002 07:56:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.002 07:56:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.002 07:56:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.002 ************************************ 00:03:52.002 START TEST no_shrink_alloc 00:03:52.002 ************************************ 00:03:52.002 07:56:22 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:52.002 07:56:22 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:52.002 07:56:22 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.002 07:56:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.002 07:56:22 -- setup/hugepages.sh@51 -- # shift 00:03:52.002 07:56:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.002 07:56:22 -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.002 07:56:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.002 07:56:22 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.002 07:56:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.002 07:56:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.002 07:56:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.002 07:56:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.002 07:56:22 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.002 07:56:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.002 07:56:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.002 07:56:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.002 07:56:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.002 07:56:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:52.002 07:56:22 -- setup/hugepages.sh@73 -- # return 0 00:03:52.002 07:56:22 -- setup/hugepages.sh@198 -- # setup output 00:03:52.002 07:56:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.002 07:56:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.304 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.304 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.304 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.570 07:56:26 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:55.570 07:56:26 -- setup/hugepages.sh@89 -- # local node 00:03:55.570 07:56:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.570 07:56:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.570 07:56:26 -- setup/hugepages.sh@92 -- # local surp 00:03:55.570 07:56:26 -- setup/hugepages.sh@93 -- # local resv 00:03:55.570 07:56:26 -- setup/hugepages.sh@94 -- # local anon 00:03:55.570 07:56:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.570 07:56:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.570 07:56:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.570 07:56:26 -- setup/common.sh@18 -- # local node= 00:03:55.570 07:56:26 -- setup/common.sh@19 -- # local var val 00:03:55.570 07:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.570 07:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.570 07:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.570 07:56:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.570 07:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.570 07:56:26 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110842780 kB' 'MemAvailable: 114037052 kB' 'Buffers: 4132 kB' 'Cached: 9187312 kB' 'SwapCached: 0 kB' 'Active: 6264972 kB' 'Inactive: 3507332 kB' 'Active(anon): 5876716 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583728 kB' 'Mapped: 241440 kB' 'Shmem: 5295856 kB' 'KReclaimable: 246028 kB' 'Slab: 874260 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628232 kB' 'KernelStack: 27264 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7476324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234612 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
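[Editor's note] Before the per-node totals are rechecked for no_shrink_alloc, the trace above shows verify_nr_hugepages doing two preliminary readouts: the transparent-hugepage mode ("always [madvise] never") and the system-wide AnonHugePages figure from /proc/meminfo. A hedged sketch of just those two checks (standard kernel interfaces; not the hugepages.sh helper itself, whose scan continues below):

#!/usr/bin/env bash
# If transparent hugepages are not globally disabled, the anonymous-hugepage
# usage is also read so it can be factored into the verification.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *'[never]'* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "THP: $thp -> AnonHugePages=${anon_kb} kB"
else
    echo "THP disabled -> anonymous hugepages ignored"
fi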
00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.570 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.570 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.571 07:56:26 -- setup/common.sh@33 -- # echo 0 00:03:55.571 07:56:26 -- setup/common.sh@33 -- # return 0 00:03:55.571 07:56:26 -- setup/hugepages.sh@97 -- # anon=0 00:03:55.571 07:56:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.571 07:56:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.571 07:56:26 -- setup/common.sh@18 -- # local node= 00:03:55.571 07:56:26 -- setup/common.sh@19 -- # local var val 00:03:55.571 07:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.571 07:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.571 07:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.571 07:56:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.571 07:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.571 07:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110843376 kB' 'MemAvailable: 114037648 kB' 'Buffers: 4132 kB' 'Cached: 9187316 kB' 'SwapCached: 0 kB' 'Active: 6264660 kB' 'Inactive: 3507332 kB' 'Active(anon): 5876404 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583436 kB' 'Mapped: 241440 kB' 'Shmem: 5295860 kB' 'KReclaimable: 246028 kB' 'Slab: 874260 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628232 kB' 'KernelStack: 27264 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7476336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234580 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.571 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.571 07:56:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- 
setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.572 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.572 07:56:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.573 07:56:26 -- setup/common.sh@33 -- # echo 0 00:03:55.573 07:56:26 -- setup/common.sh@33 -- # return 0 00:03:55.573 07:56:26 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.573 07:56:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.573 07:56:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.573 07:56:26 -- setup/common.sh@18 -- # local node= 00:03:55.573 07:56:26 -- setup/common.sh@19 -- # local var val 00:03:55.573 07:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.573 07:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.573 07:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.573 07:56:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.573 07:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.573 07:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.573 07:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110844184 kB' 'MemAvailable: 114038456 kB' 'Buffers: 4132 kB' 'Cached: 9187328 kB' 'SwapCached: 0 kB' 'Active: 6263996 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875740 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583204 kB' 'Mapped: 241356 kB' 'Shmem: 5295872 kB' 'KReclaimable: 246028 kB' 'Slab: 874252 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628224 kB' 'KernelStack: 27248 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7476352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234580 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.573 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.573 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- 
setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 
07:56:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.574 07:56:26 -- setup/common.sh@33 -- # echo 0 00:03:55.574 
07:56:26 -- setup/common.sh@33 -- # return 0 00:03:55.574 07:56:26 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.574 07:56:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.574 nr_hugepages=1024 00:03:55.574 07:56:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.574 resv_hugepages=0 00:03:55.574 07:56:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.574 surplus_hugepages=0 00:03:55.574 07:56:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.574 anon_hugepages=0 00:03:55.574 07:56:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.574 07:56:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.574 07:56:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.574 07:56:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.574 07:56:26 -- setup/common.sh@18 -- # local node= 00:03:55.574 07:56:26 -- setup/common.sh@19 -- # local var val 00:03:55.574 07:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.574 07:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.574 07:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.574 07:56:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.574 07:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.574 07:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110844128 kB' 'MemAvailable: 114038400 kB' 'Buffers: 4132 kB' 'Cached: 9187352 kB' 'SwapCached: 0 kB' 'Active: 6263596 kB' 'Inactive: 3507332 kB' 'Active(anon): 5875340 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582740 kB' 'Mapped: 241356 kB' 'Shmem: 5295896 kB' 'KReclaimable: 246028 kB' 'Slab: 874252 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628224 kB' 'KernelStack: 27200 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7476368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234580 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.574 07:56:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:55.574 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.574 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 
00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.575 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.575 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 
07:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.576 07:56:26 -- setup/common.sh@33 -- # echo 1024 00:03:55.576 07:56:26 -- setup/common.sh@33 -- # return 0 00:03:55.576 07:56:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.576 07:56:26 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.576 07:56:26 -- setup/hugepages.sh@27 -- # local node 00:03:55.576 07:56:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.576 07:56:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.576 07:56:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.576 07:56:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.576 07:56:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.576 07:56:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.576 07:56:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.576 07:56:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.576 07:56:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.576 07:56:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.576 07:56:26 
-- setup/common.sh@18 -- # local node=0 00:03:55.576 07:56:26 -- setup/common.sh@19 -- # local var val 00:03:55.576 07:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.576 07:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.576 07:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.576 07:56:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.576 07:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.576 07:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.576 07:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54003840 kB' 'MemUsed: 11655168 kB' 'SwapCached: 0 kB' 'Active: 4354380 kB' 'Inactive: 3272260 kB' 'Active(anon): 4175604 kB' 'Inactive(anon): 0 kB' 'Active(file): 178776 kB' 'Inactive(file): 3272260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7277820 kB' 'Mapped: 157356 kB' 'AnonPages: 351992 kB' 'Shmem: 3826784 kB' 'KernelStack: 14200 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 416008 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 317864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.576 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.576 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # continue 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.577 07:56:26 -- setup/common.sh@33 -- # echo 0 00:03:55.577 07:56:26 -- setup/common.sh@33 -- # return 0 00:03:55.577 07:56:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.577 07:56:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.577 07:56:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.577 07:56:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.577 07:56:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.577 node0=1024 expecting 1024 00:03:55.577 07:56:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.577 07:56:26 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:55.577 07:56:26 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:55.577 07:56:26 -- setup/hugepages.sh@202 -- # setup output 00:03:55.577 07:56:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.577 07:56:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.881 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:58.881 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.881 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.882 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.882 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:58.882 07:56:29 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:58.882 07:56:29 -- setup/hugepages.sh@89 -- # local node 00:03:58.882 07:56:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.882 07:56:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.882 07:56:29 -- setup/hugepages.sh@92 -- # local surp 00:03:58.882 07:56:29 -- setup/hugepages.sh@93 -- # local resv 00:03:58.882 07:56:29 -- setup/hugepages.sh@94 -- # local anon 00:03:58.882 07:56:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.882 07:56:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.882 07:56:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.882 07:56:29 -- setup/common.sh@18 -- # local node= 00:03:58.882 07:56:29 -- setup/common.sh@19 -- # local var val 00:03:58.882 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.882 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.882 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.882 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.882 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.882 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110867276 kB' 'MemAvailable: 114061548 kB' 'Buffers: 4132 kB' 'Cached: 9187436 kB' 'SwapCached: 0 kB' 'Active: 6265600 kB' 'Inactive: 3507332 kB' 'Active(anon): 5877344 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584136 kB' 'Mapped: 241448 kB' 'Shmem: 5295980 kB' 'KReclaimable: 246028 kB' 'Slab: 874224 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628196 kB' 'KernelStack: 27232 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7477104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234532 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.882 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.882 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.883 07:56:29 -- setup/common.sh@33 -- # echo 0 00:03:58.883 07:56:29 -- setup/common.sh@33 -- # return 0 00:03:58.883 07:56:29 -- setup/hugepages.sh@97 -- # anon=0 00:03:58.883 07:56:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.883 
07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.883 07:56:29 -- setup/common.sh@18 -- # local node= 00:03:58.883 07:56:29 -- setup/common.sh@19 -- # local var val 00:03:58.883 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.883 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.883 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.883 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.883 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.883 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110868524 kB' 'MemAvailable: 114062796 kB' 'Buffers: 4132 kB' 'Cached: 9187440 kB' 'SwapCached: 0 kB' 'Active: 6265280 kB' 'Inactive: 3507332 kB' 'Active(anon): 5877024 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583828 kB' 'Mapped: 241444 kB' 'Shmem: 5295984 kB' 'KReclaimable: 246028 kB' 'Slab: 874216 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628188 kB' 'KernelStack: 27216 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7477116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234484 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.883 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.883 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # 
continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.884 07:56:29 -- setup/common.sh@33 -- # echo 0 00:03:58.884 07:56:29 -- setup/common.sh@33 -- # return 0 00:03:58.884 07:56:29 -- setup/hugepages.sh@99 -- # surp=0 00:03:58.884 07:56:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.884 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.884 07:56:29 -- setup/common.sh@18 -- # local node= 00:03:58.884 07:56:29 -- setup/common.sh@19 -- # local var val 00:03:58.884 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.884 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.884 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.884 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.884 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.884 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.884 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110867988 kB' 'MemAvailable: 114062260 kB' 'Buffers: 4132 kB' 'Cached: 9187452 kB' 'SwapCached: 0 kB' 'Active: 6264796 kB' 'Inactive: 3507332 kB' 'Active(anon): 5876540 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 
'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583796 kB' 'Mapped: 241368 kB' 'Shmem: 5295996 kB' 'KReclaimable: 246028 kB' 'Slab: 874200 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628172 kB' 'KernelStack: 27216 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 'Committed_AS: 7477132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234500 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.884 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.884 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 
-- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.885 07:56:29 -- setup/common.sh@32 -- # continue 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.885 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.147 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.147 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 
07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.148 07:56:29 -- setup/common.sh@33 -- # echo 0 00:03:59.148 07:56:29 -- setup/common.sh@33 -- # return 0 00:03:59.148 07:56:29 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.148 07:56:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.148 nr_hugepages=1024 00:03:59.148 07:56:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.148 resv_hugepages=0 00:03:59.148 07:56:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.148 surplus_hugepages=0 00:03:59.148 07:56:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.148 anon_hugepages=0 00:03:59.148 07:56:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.148 07:56:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.148 07:56:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.148 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.148 07:56:29 -- setup/common.sh@18 -- # local node= 00:03:59.148 07:56:29 -- setup/common.sh@19 -- # local var val 00:03:59.148 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.148 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.148 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.148 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.148 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.148 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338832 kB' 'MemFree: 110867764 kB' 'MemAvailable: 114062036 kB' 'Buffers: 4132 kB' 'Cached: 9187464 kB' 'SwapCached: 0 kB' 'Active: 6264736 kB' 'Inactive: 3507332 kB' 'Active(anon): 5876480 kB' 'Inactive(anon): 0 kB' 'Active(file): 388256 kB' 'Inactive(file): 3507332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 583712 kB' 'Mapped: 241368 kB' 'Shmem: 5296008 kB' 'KReclaimable: 246028 kB' 'Slab: 874200 kB' 'SReclaimable: 246028 kB' 'SUnreclaim: 628172 kB' 'KernelStack: 27200 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509444 kB' 
'Committed_AS: 7477148 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 234516 kB' 'VmallocChunk: 0 kB' 'Percpu: 99072 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1967476 kB' 'DirectMap2M: 12392448 kB' 'DirectMap1G: 121634816 kB' 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.148 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.148 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 
-- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.149 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.149 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.149 07:56:29 -- setup/common.sh@33 -- # echo 1024 00:03:59.149 07:56:29 -- setup/common.sh@33 -- # return 0 00:03:59.149 07:56:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.149 07:56:29 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.149 07:56:29 -- setup/hugepages.sh@27 -- # local node 00:03:59.149 07:56:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.150 07:56:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.150 07:56:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.150 07:56:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.150 07:56:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.150 07:56:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.150 07:56:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.150 07:56:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.150 07:56:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.150 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.150 07:56:29 -- setup/common.sh@18 -- # local node=0 00:03:59.150 07:56:29 -- setup/common.sh@19 -- # local var val 00:03:59.150 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.150 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.150 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.150 07:56:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.150 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.150 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54008156 kB' 'MemUsed: 11650852 kB' 'SwapCached: 0 kB' 'Active: 4355344 kB' 'Inactive: 3272260 kB' 'Active(anon): 4176568 kB' 'Inactive(anon): 0 kB' 'Active(file): 178776 kB' 'Inactive(file): 3272260 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7277860 kB' 'Mapped: 157368 kB' 'AnonPages: 352908 kB' 'Shmem: 3826824 kB' 'KernelStack: 14216 kB' 'PageTables: 5036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98144 kB' 'Slab: 416308 kB' 'SReclaimable: 98144 kB' 'SUnreclaim: 318164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 
07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 
07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.150 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # continue 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.151 07:56:29 -- setup/common.sh@33 -- # echo 0 00:03:59.151 07:56:29 -- setup/common.sh@33 -- # return 0 00:03:59.151 07:56:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.151 07:56:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.151 07:56:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.151 07:56:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.151 07:56:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.151 node0=1024 expecting 1024 00:03:59.151 07:56:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.151 00:03:59.151 real 0m6.993s 00:03:59.151 user 0m2.784s 00:03:59.151 sys 0m4.300s 00:03:59.151 07:56:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.151 07:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 ************************************ 00:03:59.151 END TEST no_shrink_alloc 00:03:59.151 ************************************ 00:03:59.151 07:56:29 -- setup/hugepages.sh@217 -- # clear_hp 00:03:59.151 07:56:29 -- setup/hugepages.sh@37 -- # local node hp 00:03:59.151 07:56:29 -- setup/hugepages.sh@39 
-- # for node in "${!nodes_sys[@]}" 00:03:59.151 07:56:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.151 07:56:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.151 07:56:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.151 07:56:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.151 07:56:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.151 07:56:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.151 07:56:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.151 07:56:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.151 07:56:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.151 07:56:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.151 07:56:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.151 00:03:59.151 real 0m25.987s 00:03:59.151 user 0m10.355s 00:03:59.151 sys 0m16.002s 00:03:59.151 07:56:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.151 07:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 ************************************ 00:03:59.151 END TEST hugepages 00:03:59.151 ************************************ 00:03:59.151 07:56:29 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:59.151 07:56:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.151 07:56:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.151 07:56:29 -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 ************************************ 00:03:59.151 START TEST driver 00:03:59.151 ************************************ 00:03:59.151 07:56:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:59.151 * Looking for test storage... 
00:03:59.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.151 07:56:29 -- setup/driver.sh@68 -- # setup reset 00:03:59.151 07:56:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.151 07:56:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.444 07:56:34 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:04.444 07:56:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.444 07:56:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.444 07:56:34 -- common/autotest_common.sh@10 -- # set +x 00:04:04.444 ************************************ 00:04:04.444 START TEST guess_driver 00:04:04.444 ************************************ 00:04:04.444 07:56:34 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:04.444 07:56:34 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:04.444 07:56:34 -- setup/driver.sh@47 -- # local fail=0 00:04:04.444 07:56:34 -- setup/driver.sh@49 -- # pick_driver 00:04:04.444 07:56:34 -- setup/driver.sh@36 -- # vfio 00:04:04.444 07:56:34 -- setup/driver.sh@21 -- # local iommu_grups 00:04:04.444 07:56:34 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:04.444 07:56:34 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:04.444 07:56:34 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:04.444 07:56:34 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:04.444 07:56:34 -- setup/driver.sh@29 -- # (( 322 > 0 )) 00:04:04.444 07:56:34 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:04.444 07:56:34 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:04.444 07:56:34 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:04.444 07:56:34 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:04.444 07:56:34 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:04.444 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.444 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.444 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.444 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.444 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:04.444 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:04.444 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:04.444 07:56:34 -- setup/driver.sh@30 -- # return 0 00:04:04.444 07:56:34 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:04.444 07:56:34 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:04.444 07:56:34 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:04.444 07:56:34 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:04.444 Looking for driver=vfio-pci 00:04:04.444 07:56:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.444 07:56:34 -- setup/driver.sh@45 -- # setup output config 00:04:04.444 07:56:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.444 07:56:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.746 07:56:37 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:37 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:07.746 07:56:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:07.746 07:56:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:07.746 07:56:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.746 07:56:38 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:07.746 07:56:38 -- setup/driver.sh@65 -- # setup reset 00:04:07.746 07:56:38 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.746 07:56:38 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.044 00:04:13.044 real 0m8.468s 00:04:13.044 user 0m2.822s 00:04:13.044 sys 0m4.892s 00:04:13.044 07:56:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.044 07:56:42 -- common/autotest_common.sh@10 -- # set +x 00:04:13.044 ************************************ 00:04:13.044 END TEST guess_driver 00:04:13.044 ************************************ 00:04:13.044 00:04:13.044 real 0m13.337s 00:04:13.044 user 0m4.355s 00:04:13.044 sys 0m7.486s 00:04:13.044 07:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.044 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:04:13.044 ************************************ 00:04:13.044 END TEST driver 00:04:13.044 ************************************ 00:04:13.044 07:56:43 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.044 07:56:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.044 07:56:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.044 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:04:13.044 ************************************ 00:04:13.044 START TEST devices 00:04:13.044 ************************************ 00:04:13.044 07:56:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:13.044 * Looking for test storage... 
00:04:13.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:13.044 07:56:43 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:13.044 07:56:43 -- setup/devices.sh@192 -- # setup reset 00:04:13.044 07:56:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.044 07:56:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.345 07:56:46 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:16.345 07:56:46 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:16.345 07:56:46 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:16.346 07:56:46 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:16.346 07:56:46 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.346 07:56:46 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:16.346 07:56:46 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:16.346 07:56:46 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.346 07:56:46 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.346 07:56:46 -- setup/devices.sh@196 -- # blocks=() 00:04:16.346 07:56:46 -- setup/devices.sh@196 -- # declare -a blocks 00:04:16.346 07:56:46 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:16.346 07:56:46 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:16.346 07:56:46 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:16.346 07:56:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:16.346 07:56:46 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:16.346 07:56:46 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:16.346 07:56:46 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:16.346 07:56:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:16.346 07:56:46 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:16.346 07:56:46 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:16.346 07:56:46 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:16.346 No valid GPT data, bailing 00:04:16.346 07:56:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.346 07:56:46 -- scripts/common.sh@393 -- # pt= 00:04:16.346 07:56:46 -- scripts/common.sh@394 -- # return 1 00:04:16.346 07:56:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:16.346 07:56:46 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:16.346 07:56:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:16.346 07:56:46 -- setup/common.sh@80 -- # echo 1920383410176 00:04:16.346 07:56:46 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:16.346 07:56:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:16.346 07:56:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:16.346 07:56:46 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:16.346 07:56:46 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:16.346 07:56:46 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:16.346 07:56:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.346 07:56:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.346 07:56:46 -- common/autotest_common.sh@10 -- # set +x 00:04:16.346 ************************************ 00:04:16.346 START TEST nvme_mount 00:04:16.346 ************************************ 00:04:16.346 07:56:46 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:16.346 07:56:46 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:16.346 07:56:46 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:16.346 07:56:46 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:16.346 07:56:46 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:16.346 07:56:46 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:16.346 07:56:46 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:16.346 07:56:46 -- setup/common.sh@40 -- # local part_no=1 00:04:16.346 07:56:46 -- setup/common.sh@41 -- # local size=1073741824 00:04:16.346 07:56:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:16.346 07:56:46 -- setup/common.sh@44 -- # parts=() 00:04:16.346 07:56:46 -- setup/common.sh@44 -- # local parts 00:04:16.346 07:56:46 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:16.346 07:56:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.346 07:56:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:16.346 07:56:46 -- setup/common.sh@46 -- # (( part++ )) 00:04:16.346 07:56:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:16.346 07:56:46 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:16.346 07:56:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:16.346 07:56:46 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:17.730 Creating new GPT entries in memory. 00:04:17.730 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:17.730 other utilities. 00:04:17.730 07:56:47 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:17.730 07:56:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.730 07:56:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.730 07:56:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.730 07:56:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.673 Creating new GPT entries in memory. 00:04:18.673 The operation has completed successfully. 
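The nvme_mount steps above are easier to follow in condensed form. A minimal bash sketch of the same sequence, assuming /dev/nvme0n1 as the test disk; the mount point and dummy file path are illustrative stand-ins, not the test's actual workspace paths:

# Condensed approximation of the nvme_mount partition/format/mount flow traced
# in this log; everything except the disk name is an illustrative assumption.
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount                                   # hypothetical mount point
sgdisk "$disk" --zap-all                              # drop any existing GPT, as common.sh@56 does
flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # one ~1 GiB partition (2097152 x 512-byte sectors)
mkfs.ext4 -qF "${disk}p1"                             # quiet, forced format of the new partition
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"                              # mounted filesystem the test then verifies
touch "$mnt/test_nvme"                                # dummy file the verify step looks for

The verify step that follows in the trace checks that an nvme0n1:nvme0n1p1 mount is active before walking the PCI device list.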
00:04:18.673 07:56:48 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.673 07:56:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.673 07:56:48 -- setup/common.sh@62 -- # wait 813531 00:04:18.673 07:56:48 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.673 07:56:48 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:18.673 07:56:48 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.673 07:56:48 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:18.673 07:56:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:18.673 07:56:49 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.673 07:56:49 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.673 07:56:49 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:18.673 07:56:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:18.673 07:56:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.673 07:56:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.673 07:56:49 -- setup/devices.sh@53 -- # local found=0 00:04:18.673 07:56:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.673 07:56:49 -- setup/devices.sh@56 -- # : 00:04:18.673 07:56:49 -- setup/devices.sh@59 -- # local pci status 00:04:18.673 07:56:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.673 07:56:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:18.673 07:56:49 -- setup/devices.sh@47 -- # setup output config 00:04:18.673 07:56:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.673 07:56:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.977 07:56:52 -- setup/devices.sh@63 -- # found=1 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 
07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.977 07:56:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.977 07:56:52 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:21.977 07:56:52 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.977 07:56:52 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.977 07:56:52 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.977 07:56:52 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:21.977 07:56:52 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.977 07:56:52 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.977 07:56:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.977 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.977 07:56:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.977 07:56:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.239 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:22.239 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:22.239 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.239 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.239 07:56:52 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:22.239 07:56:52 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:22.239 07:56:52 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.239 07:56:52 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:22.239 07:56:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:22.239 07:56:52 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.239 07:56:52 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.239 07:56:52 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:22.239 07:56:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:22.239 07:56:52 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.239 07:56:52 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.239 07:56:52 -- setup/devices.sh@53 -- # local found=0 00:04:22.239 07:56:52 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.239 07:56:52 -- setup/devices.sh@56 -- # : 00:04:22.239 07:56:52 -- setup/devices.sh@59 -- # local pci status 00:04:22.239 07:56:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.239 07:56:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:22.239 07:56:52 -- setup/devices.sh@47 -- # setup output config 00:04:22.239 07:56:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.239 07:56:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.541 07:56:55 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:55 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:25.541 07:56:55 -- setup/devices.sh@63 -- # found=1 00:04:25.541 07:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:55 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:55 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:55 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:55 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.541 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.541 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.542 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.542 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.542 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.542 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.542 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.542 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.542 07:56:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.542 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.803 07:56:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.803 07:56:56 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:25.803 07:56:56 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.803 07:56:56 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.803 07:56:56 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.803 07:56:56 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.803 07:56:56 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:25.803 07:56:56 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:25.803 07:56:56 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:25.803 07:56:56 -- setup/devices.sh@50 -- # local mount_point= 00:04:25.803 07:56:56 -- setup/devices.sh@51 -- # local test_file= 00:04:25.803 07:56:56 -- setup/devices.sh@53 -- # local found=0 00:04:25.803 07:56:56 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:25.803 07:56:56 -- setup/devices.sh@59 -- # local pci status 00:04:25.803 07:56:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.803 07:56:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:25.803 07:56:56 -- setup/devices.sh@47 -- # setup output config 00:04:25.803 07:56:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.803 07:56:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.105 07:56:59 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.105 07:56:59 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:29.105 07:56:59 -- setup/devices.sh@63 -- # found=1 00:04:29.105 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.105 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.105 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.105 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.105 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.105 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.106 07:56:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.106 07:56:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.106 07:56:59 -- setup/devices.sh@68 -- # return 0 00:04:29.106 07:56:59 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:29.106 07:56:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.106 07:56:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:29.106 07:56:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.106 07:56:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.106 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.106 00:04:29.106 real 0m12.797s 00:04:29.106 user 0m3.965s 00:04:29.106 sys 0m6.739s 00:04:29.106 07:56:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.106 07:56:59 -- common/autotest_common.sh@10 -- # set +x 00:04:29.106 ************************************ 00:04:29.106 END TEST nvme_mount 00:04:29.106 ************************************ 00:04:29.367 07:56:59 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:29.367 07:56:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:29.367 07:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:29.367 07:56:59 -- common/autotest_common.sh@10 -- # set +x 00:04:29.367 ************************************ 00:04:29.367 START TEST dm_mount 00:04:29.367 ************************************ 00:04:29.367 07:56:59 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:29.367 07:56:59 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:29.367 07:56:59 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:29.367 07:56:59 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:29.367 07:56:59 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:29.367 07:56:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.367 07:56:59 -- setup/common.sh@40 -- # local part_no=2 00:04:29.367 07:56:59 -- setup/common.sh@41 -- # local size=1073741824 00:04:29.367 07:56:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.367 07:56:59 -- setup/common.sh@44 -- # parts=() 00:04:29.367 07:56:59 -- setup/common.sh@44 -- # local parts 00:04:29.367 07:56:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.367 07:56:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.367 07:56:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.367 07:56:59 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.367 07:56:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.367 07:56:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.367 07:56:59 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.367 07:56:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.367 07:56:59 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:29.367 07:56:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.367 07:56:59 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:30.309 Creating new GPT entries in memory. 00:04:30.310 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.310 other utilities. 00:04:30.310 07:57:00 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.310 07:57:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.310 07:57:00 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.310 07:57:00 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.310 07:57:00 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:31.273 Creating new GPT entries in memory. 00:04:31.273 The operation has completed successfully. 
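The dm_mount test that follows is likewise spread over many trace entries. A compact bash sketch under stated assumptions: the excerpt shows "dmsetup create nvme_dm_test" but not the table it is fed, so the linear table below is an assumption chosen only to match the holders checked later (both nvme0n1p1 and nvme0n1p2 end up holding dm-1); the mount point is again illustrative.

# Approximation of the dm_mount flow: two 1 GiB partitions joined into one
# device-mapper target, then formatted and mounted. The dm table is an assumed
# linear concatenation; the real test's table is not visible in this excerpt.
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                               # wipe the GPT before repartitioning
flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # partition 1: sectors 2048..2099199
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # partition 2: sectors 2099200..4196351
dmsetup create nvme_dm_test <<TABLE
0 2097152 linear ${disk}p1 0
2097152 2097152 linear ${disk}p2 0
TABLE
dm=$(readlink -f /dev/mapper/nvme_dm_test)             # resolves to /dev/dm-1 in this run
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /tmp/dm_mount                                 # hypothetical mount point
mount /dev/mapper/nvme_dm_test /tmp/dm_mount

The cleanup path seen further down mirrors this: dmsetup remove --force nvme_dm_test, then wipefs --all on each partition and on the whole disk.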
00:04:31.273 07:57:01 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.273 07:57:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.273 07:57:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.274 07:57:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.274 07:57:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:32.215 The operation has completed successfully. 00:04:32.215 07:57:02 -- setup/common.sh@57 -- # (( part++ )) 00:04:32.215 07:57:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.215 07:57:02 -- setup/common.sh@62 -- # wait 818712 00:04:32.476 07:57:02 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:32.476 07:57:02 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.476 07:57:02 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.476 07:57:02 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:32.476 07:57:02 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:32.476 07:57:02 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.476 07:57:02 -- setup/devices.sh@161 -- # break 00:04:32.476 07:57:02 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.476 07:57:02 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:32.476 07:57:02 -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:32.476 07:57:02 -- setup/devices.sh@166 -- # dm=dm-1 00:04:32.476 07:57:02 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:32.476 07:57:02 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:32.476 07:57:02 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.476 07:57:02 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:32.476 07:57:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.476 07:57:02 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.476 07:57:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:32.476 07:57:02 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.476 07:57:02 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.476 07:57:02 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:32.476 07:57:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:32.476 07:57:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:32.476 07:57:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.476 07:57:02 -- setup/devices.sh@53 -- # local found=0 00:04:32.476 07:57:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.476 07:57:02 -- setup/devices.sh@56 -- # : 00:04:32.476 07:57:02 -- 
setup/devices.sh@59 -- # local pci status 00:04:32.476 07:57:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.476 07:57:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:32.476 07:57:02 -- setup/devices.sh@47 -- # setup output config 00:04:32.476 07:57:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.476 07:57:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:35.779 07:57:06 -- setup/devices.sh@63 -- # found=1 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.779 07:57:06 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.779 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.041 07:57:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.041 07:57:06 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:36.041 07:57:06 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.041 07:57:06 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.041 07:57:06 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.041 07:57:06 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:36.041 07:57:06 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:04:36.041 07:57:06 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:36.041 07:57:06 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:04:36.041 07:57:06 -- setup/devices.sh@50 -- # local mount_point= 00:04:36.041 07:57:06 -- setup/devices.sh@51 -- # local test_file= 00:04:36.041 07:57:06 -- setup/devices.sh@53 -- # local found=0 00:04:36.041 07:57:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.041 07:57:06 -- setup/devices.sh@59 -- # local pci status 00:04:36.041 07:57:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.041 07:57:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:36.041 07:57:06 -- setup/devices.sh@47 -- # setup output config 00:04:36.041 07:57:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.041 07:57:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:04:39.344 07:57:09 -- setup/devices.sh@63 -- # found=1 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.344 07:57:09 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.344 07:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.606 07:57:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.606 07:57:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.606 07:57:10 -- setup/devices.sh@68 -- # return 0 00:04:39.606 07:57:10 -- setup/devices.sh@187 -- # cleanup_dm 00:04:39.606 07:57:10 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.606 07:57:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.606 07:57:10 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:39.606 07:57:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.606 07:57:10 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:39.606 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.606 07:57:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.606 07:57:10 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:39.606 00:04:39.606 real 0m10.338s 00:04:39.606 user 0m2.842s 00:04:39.606 sys 0m4.581s 00:04:39.606 07:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.606 07:57:10 -- common/autotest_common.sh@10 -- # set +x 00:04:39.606 ************************************ 00:04:39.606 END TEST dm_mount 00:04:39.606 ************************************ 00:04:39.606 07:57:10 -- setup/devices.sh@1 -- # cleanup 00:04:39.606 07:57:10 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:39.606 07:57:10 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.606 07:57:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.606 07:57:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:39.606 07:57:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.606 07:57:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.867 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:39.867 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:39.867 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.867 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.867 07:57:10 -- setup/devices.sh@12 -- # cleanup_dm 00:04:39.867 07:57:10 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.867 07:57:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.867 07:57:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.867 07:57:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.868 07:57:10 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.868 07:57:10 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:39.868 00:04:39.868 real 0m27.379s 00:04:39.868 user 0m8.260s 00:04:39.868 sys 0m13.984s 00:04:39.868 07:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.868 07:57:10 -- common/autotest_common.sh@10 -- # set +x 00:04:39.868 ************************************ 00:04:39.868 END TEST devices 00:04:39.868 ************************************ 00:04:39.868 00:04:39.868 real 1m31.860s 00:04:39.868 user 0m31.471s 00:04:39.868 sys 0m52.048s 00:04:39.868 07:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.868 07:57:10 -- common/autotest_common.sh@10 -- # set +x 00:04:39.868 ************************************ 00:04:39.868 END TEST setup.sh 00:04:39.868 ************************************ 00:04:40.129 07:57:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:43.426 Hugepages 00:04:43.426 node hugesize free / total 00:04:43.426 node0 1048576kB 0 / 0 00:04:43.426 node0 2048kB 2048 / 2048 00:04:43.427 node1 1048576kB 0 / 0 00:04:43.427 node1 2048kB 0 / 0 00:04:43.427 00:04:43.427 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.427 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:43.427 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:43.427 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:43.427 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:43.427 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:43.427 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:43.427 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:43.427 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:43.427 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:43.427 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:43.427 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:43.427 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:43.427 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:43.427 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:43.427 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:43.427 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:43.427 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:43.427 07:57:13 -- spdk/autotest.sh@141 -- # uname -s 00:04:43.427 07:57:13 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:43.427 07:57:13 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:43.427 07:57:13 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.627 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:04:47.627 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:47.627 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:49.014 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:49.014 07:57:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:49.953 07:57:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:49.953 07:57:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:49.953 07:57:20 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:49.953 07:57:20 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:49.953 07:57:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:49.953 07:57:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:49.953 07:57:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.953 07:57:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.953 07:57:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:49.953 07:57:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:49.953 07:57:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:49.953 07:57:20 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.251 Waiting for block devices as requested 00:04:53.251 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:53.510 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:53.510 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:53.510 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:53.769 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:53.769 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:53.769 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:54.029 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:54.029 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:54.289 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:54.289 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:54.289 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:54.289 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:54.549 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:54.549 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:54.549 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:54.549 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:54.809 07:57:25 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:54.809 07:57:25 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:54.809 07:57:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:54.809 07:57:25 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:54.809 07:57:25 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:54.809 07:57:25 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:54.809 07:57:25 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:04:54.809 07:57:25 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:54.809 07:57:25 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:54.809 07:57:25 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:54.809 07:57:25 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:54.809 07:57:25 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:54.809 07:57:25 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:54.810 07:57:25 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:54.810 07:57:25 -- common/autotest_common.sh@1542 -- # continue 00:04:54.810 07:57:25 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:54.810 07:57:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:54.810 07:57:25 -- common/autotest_common.sh@10 -- # set +x 00:04:54.810 07:57:25 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:54.810 07:57:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:54.810 07:57:25 -- common/autotest_common.sh@10 -- # set +x 00:04:54.810 07:57:25 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.103 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.103 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.364 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:58.364 07:57:28 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:58.364 07:57:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:58.364 07:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:58.364 07:57:28 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:58.364 07:57:28 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:58.364 07:57:28 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.364 07:57:28 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:58.364 07:57:28 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:58.364 07:57:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:58.364 07:57:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:58.364 
07:57:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:58.364 07:57:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.364 07:57:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.364 07:57:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.624 07:57:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:58.624 07:57:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:58.624 07:57:29 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:58.624 07:57:29 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:58.624 07:57:29 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:04:58.625 07:57:29 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:58.625 07:57:29 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:58.625 07:57:29 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:58.625 07:57:29 -- common/autotest_common.sh@1578 -- # return 0 00:04:58.625 07:57:29 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:58.625 07:57:29 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:58.625 07:57:29 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:58.625 07:57:29 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:58.625 07:57:29 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:58.625 07:57:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:58.625 07:57:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.625 07:57:29 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:58.625 07:57:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.625 07:57:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.625 07:57:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.625 ************************************ 00:04:58.625 START TEST env 00:04:58.625 ************************************ 00:04:58.625 07:57:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:58.625 * Looking for test storage... 
00:04:58.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:58.625 07:57:29 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:58.625 07:57:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.625 07:57:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.625 07:57:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.625 ************************************ 00:04:58.625 START TEST env_memory 00:04:58.625 ************************************ 00:04:58.625 07:57:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:58.625 00:04:58.625 00:04:58.625 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.625 http://cunit.sourceforge.net/ 00:04:58.625 00:04:58.625 00:04:58.625 Suite: memory 00:04:58.625 Test: alloc and free memory map ...[2024-06-11 07:57:29.218405] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.625 passed 00:04:58.625 Test: mem map translation ...[2024-06-11 07:57:29.244102] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.625 [2024-06-11 07:57:29.244133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.625 [2024-06-11 07:57:29.244182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.625 [2024-06-11 07:57:29.244191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.885 passed 00:04:58.885 Test: mem map registration ...[2024-06-11 07:57:29.299390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:58.885 [2024-06-11 07:57:29.299413] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:58.885 passed 00:04:58.885 Test: mem map adjacent registrations ...passed 00:04:58.885 00:04:58.885 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.885 suites 1 1 n/a 0 0 00:04:58.885 tests 4 4 4 0 0 00:04:58.885 asserts 152 152 152 0 n/a 00:04:58.885 00:04:58.886 Elapsed time = 0.195 seconds 00:04:58.886 00:04:58.886 real 0m0.208s 00:04:58.886 user 0m0.199s 00:04:58.886 sys 0m0.008s 00:04:58.886 07:57:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.886 07:57:29 -- common/autotest_common.sh@10 -- # set +x 00:04:58.886 ************************************ 00:04:58.886 END TEST env_memory 00:04:58.886 ************************************ 00:04:58.886 07:57:29 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.886 07:57:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.886 07:57:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.886 07:57:29 -- common/autotest_common.sh@10 -- # set +x 
00:04:58.886 ************************************ 00:04:58.886 START TEST env_vtophys 00:04:58.886 ************************************ 00:04:58.886 07:57:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:58.886 EAL: lib.eal log level changed from notice to debug 00:04:58.886 EAL: Detected lcore 0 as core 0 on socket 0 00:04:58.886 EAL: Detected lcore 1 as core 1 on socket 0 00:04:58.886 EAL: Detected lcore 2 as core 2 on socket 0 00:04:58.886 EAL: Detected lcore 3 as core 3 on socket 0 00:04:58.886 EAL: Detected lcore 4 as core 4 on socket 0 00:04:58.886 EAL: Detected lcore 5 as core 5 on socket 0 00:04:58.886 EAL: Detected lcore 6 as core 6 on socket 0 00:04:58.886 EAL: Detected lcore 7 as core 7 on socket 0 00:04:58.886 EAL: Detected lcore 8 as core 8 on socket 0 00:04:58.886 EAL: Detected lcore 9 as core 9 on socket 0 00:04:58.886 EAL: Detected lcore 10 as core 10 on socket 0 00:04:58.886 EAL: Detected lcore 11 as core 11 on socket 0 00:04:58.886 EAL: Detected lcore 12 as core 12 on socket 0 00:04:58.886 EAL: Detected lcore 13 as core 13 on socket 0 00:04:58.886 EAL: Detected lcore 14 as core 14 on socket 0 00:04:58.886 EAL: Detected lcore 15 as core 15 on socket 0 00:04:58.886 EAL: Detected lcore 16 as core 16 on socket 0 00:04:58.886 EAL: Detected lcore 17 as core 17 on socket 0 00:04:58.886 EAL: Detected lcore 18 as core 18 on socket 0 00:04:58.886 EAL: Detected lcore 19 as core 19 on socket 0 00:04:58.886 EAL: Detected lcore 20 as core 20 on socket 0 00:04:58.886 EAL: Detected lcore 21 as core 21 on socket 0 00:04:58.886 EAL: Detected lcore 22 as core 22 on socket 0 00:04:58.886 EAL: Detected lcore 23 as core 23 on socket 0 00:04:58.886 EAL: Detected lcore 24 as core 24 on socket 0 00:04:58.886 EAL: Detected lcore 25 as core 25 on socket 0 00:04:58.886 EAL: Detected lcore 26 as core 26 on socket 0 00:04:58.886 EAL: Detected lcore 27 as core 27 on socket 0 00:04:58.886 EAL: Detected lcore 28 as core 28 on socket 0 00:04:58.886 EAL: Detected lcore 29 as core 29 on socket 0 00:04:58.886 EAL: Detected lcore 30 as core 30 on socket 0 00:04:58.886 EAL: Detected lcore 31 as core 31 on socket 0 00:04:58.886 EAL: Detected lcore 32 as core 32 on socket 0 00:04:58.886 EAL: Detected lcore 33 as core 33 on socket 0 00:04:58.886 EAL: Detected lcore 34 as core 34 on socket 0 00:04:58.886 EAL: Detected lcore 35 as core 35 on socket 0 00:04:58.886 EAL: Detected lcore 36 as core 0 on socket 1 00:04:58.886 EAL: Detected lcore 37 as core 1 on socket 1 00:04:58.886 EAL: Detected lcore 38 as core 2 on socket 1 00:04:58.886 EAL: Detected lcore 39 as core 3 on socket 1 00:04:58.886 EAL: Detected lcore 40 as core 4 on socket 1 00:04:58.886 EAL: Detected lcore 41 as core 5 on socket 1 00:04:58.886 EAL: Detected lcore 42 as core 6 on socket 1 00:04:58.886 EAL: Detected lcore 43 as core 7 on socket 1 00:04:58.886 EAL: Detected lcore 44 as core 8 on socket 1 00:04:58.886 EAL: Detected lcore 45 as core 9 on socket 1 00:04:58.886 EAL: Detected lcore 46 as core 10 on socket 1 00:04:58.886 EAL: Detected lcore 47 as core 11 on socket 1 00:04:58.886 EAL: Detected lcore 48 as core 12 on socket 1 00:04:58.886 EAL: Detected lcore 49 as core 13 on socket 1 00:04:58.886 EAL: Detected lcore 50 as core 14 on socket 1 00:04:58.886 EAL: Detected lcore 51 as core 15 on socket 1 00:04:58.886 EAL: Detected lcore 52 as core 16 on socket 1 00:04:58.886 EAL: Detected lcore 53 as core 17 on socket 1 00:04:58.886 EAL: Detected lcore 54 as core 18 on socket 1 
00:04:58.886 EAL: Detected lcore 55 as core 19 on socket 1 00:04:58.886 EAL: Detected lcore 56 as core 20 on socket 1 00:04:58.886 EAL: Detected lcore 57 as core 21 on socket 1 00:04:58.886 EAL: Detected lcore 58 as core 22 on socket 1 00:04:58.886 EAL: Detected lcore 59 as core 23 on socket 1 00:04:58.886 EAL: Detected lcore 60 as core 24 on socket 1 00:04:58.886 EAL: Detected lcore 61 as core 25 on socket 1 00:04:58.886 EAL: Detected lcore 62 as core 26 on socket 1 00:04:58.886 EAL: Detected lcore 63 as core 27 on socket 1 00:04:58.886 EAL: Detected lcore 64 as core 28 on socket 1 00:04:58.886 EAL: Detected lcore 65 as core 29 on socket 1 00:04:58.886 EAL: Detected lcore 66 as core 30 on socket 1 00:04:58.886 EAL: Detected lcore 67 as core 31 on socket 1 00:04:58.886 EAL: Detected lcore 68 as core 32 on socket 1 00:04:58.886 EAL: Detected lcore 69 as core 33 on socket 1 00:04:58.886 EAL: Detected lcore 70 as core 34 on socket 1 00:04:58.886 EAL: Detected lcore 71 as core 35 on socket 1 00:04:58.886 EAL: Detected lcore 72 as core 0 on socket 0 00:04:58.886 EAL: Detected lcore 73 as core 1 on socket 0 00:04:58.886 EAL: Detected lcore 74 as core 2 on socket 0 00:04:58.886 EAL: Detected lcore 75 as core 3 on socket 0 00:04:58.886 EAL: Detected lcore 76 as core 4 on socket 0 00:04:58.886 EAL: Detected lcore 77 as core 5 on socket 0 00:04:58.886 EAL: Detected lcore 78 as core 6 on socket 0 00:04:58.886 EAL: Detected lcore 79 as core 7 on socket 0 00:04:58.886 EAL: Detected lcore 80 as core 8 on socket 0 00:04:58.886 EAL: Detected lcore 81 as core 9 on socket 0 00:04:58.886 EAL: Detected lcore 82 as core 10 on socket 0 00:04:58.886 EAL: Detected lcore 83 as core 11 on socket 0 00:04:58.886 EAL: Detected lcore 84 as core 12 on socket 0 00:04:58.886 EAL: Detected lcore 85 as core 13 on socket 0 00:04:58.886 EAL: Detected lcore 86 as core 14 on socket 0 00:04:58.886 EAL: Detected lcore 87 as core 15 on socket 0 00:04:58.886 EAL: Detected lcore 88 as core 16 on socket 0 00:04:58.886 EAL: Detected lcore 89 as core 17 on socket 0 00:04:58.886 EAL: Detected lcore 90 as core 18 on socket 0 00:04:58.886 EAL: Detected lcore 91 as core 19 on socket 0 00:04:58.886 EAL: Detected lcore 92 as core 20 on socket 0 00:04:58.886 EAL: Detected lcore 93 as core 21 on socket 0 00:04:58.886 EAL: Detected lcore 94 as core 22 on socket 0 00:04:58.886 EAL: Detected lcore 95 as core 23 on socket 0 00:04:58.886 EAL: Detected lcore 96 as core 24 on socket 0 00:04:58.886 EAL: Detected lcore 97 as core 25 on socket 0 00:04:58.886 EAL: Detected lcore 98 as core 26 on socket 0 00:04:58.886 EAL: Detected lcore 99 as core 27 on socket 0 00:04:58.886 EAL: Detected lcore 100 as core 28 on socket 0 00:04:58.886 EAL: Detected lcore 101 as core 29 on socket 0 00:04:58.886 EAL: Detected lcore 102 as core 30 on socket 0 00:04:58.886 EAL: Detected lcore 103 as core 31 on socket 0 00:04:58.886 EAL: Detected lcore 104 as core 32 on socket 0 00:04:58.886 EAL: Detected lcore 105 as core 33 on socket 0 00:04:58.886 EAL: Detected lcore 106 as core 34 on socket 0 00:04:58.886 EAL: Detected lcore 107 as core 35 on socket 0 00:04:58.886 EAL: Detected lcore 108 as core 0 on socket 1 00:04:58.886 EAL: Detected lcore 109 as core 1 on socket 1 00:04:58.886 EAL: Detected lcore 110 as core 2 on socket 1 00:04:58.886 EAL: Detected lcore 111 as core 3 on socket 1 00:04:58.886 EAL: Detected lcore 112 as core 4 on socket 1 00:04:58.886 EAL: Detected lcore 113 as core 5 on socket 1 00:04:58.886 EAL: Detected lcore 114 as core 6 on socket 1 00:04:58.886 
EAL: Detected lcore 115 as core 7 on socket 1 00:04:58.886 EAL: Detected lcore 116 as core 8 on socket 1 00:04:58.886 EAL: Detected lcore 117 as core 9 on socket 1 00:04:58.886 EAL: Detected lcore 118 as core 10 on socket 1 00:04:58.886 EAL: Detected lcore 119 as core 11 on socket 1 00:04:58.886 EAL: Detected lcore 120 as core 12 on socket 1 00:04:58.886 EAL: Detected lcore 121 as core 13 on socket 1 00:04:58.886 EAL: Detected lcore 122 as core 14 on socket 1 00:04:58.886 EAL: Detected lcore 123 as core 15 on socket 1 00:04:58.886 EAL: Detected lcore 124 as core 16 on socket 1 00:04:58.886 EAL: Detected lcore 125 as core 17 on socket 1 00:04:58.886 EAL: Detected lcore 126 as core 18 on socket 1 00:04:58.886 EAL: Detected lcore 127 as core 19 on socket 1 00:04:58.886 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:58.886 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:58.886 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:58.886 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:58.886 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:58.886 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:58.886 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:58.886 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:58.886 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:58.886 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:58.886 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:58.886 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:58.886 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:58.886 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:58.886 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:58.886 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:58.886 EAL: Maximum logical cores by configuration: 128 00:04:58.886 EAL: Detected CPU lcores: 128 00:04:58.886 EAL: Detected NUMA nodes: 2 00:04:58.886 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:58.886 EAL: Detected shared linkage of DPDK 00:04:58.886 EAL: No shared files mode enabled, IPC will be disabled 00:04:58.886 EAL: Bus pci wants IOVA as 'DC' 00:04:58.886 EAL: Buses did not request a specific IOVA mode. 00:04:58.886 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:58.886 EAL: Selected IOVA mode 'VA' 00:04:58.886 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.886 EAL: Probing VFIO support... 00:04:58.886 EAL: IOMMU type 1 (Type 1) is supported 00:04:58.886 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:58.886 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:58.886 EAL: VFIO support initialized 00:04:58.886 EAL: Ask a virtual area of 0x2e000 bytes 00:04:58.886 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:58.886 EAL: Setting up physically contiguous memory... 
00:04:58.886 EAL: Setting maximum number of open files to 524288 00:04:58.886 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:58.886 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:58.886 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:58.886 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.886 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:58.886 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.886 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:58.887 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.887 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:58.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.887 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:58.887 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.887 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:58.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.887 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:58.887 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.887 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:58.887 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.887 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:58.887 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:58.887 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.887 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:58.887 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.887 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:58.887 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.887 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:58.887 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.887 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:58.887 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.887 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:58.887 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.887 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:58.887 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.887 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:58.887 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:58.887 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.887 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:58.887 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:58.887 EAL: Hugepages will be freed exactly as allocated. 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: TSC frequency is ~2400000 KHz 00:04:58.887 EAL: Main lcore 0 is ready (tid=7f0e1a562a00;cpuset=[0]) 00:04:58.887 EAL: Trying to obtain current memory policy. 00:04:58.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.887 EAL: Restoring previous memory policy: 0 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was expanded by 2MB 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:58.887 EAL: Mem event callback 'spdk:(nil)' registered 00:04:58.887 00:04:58.887 00:04:58.887 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.887 http://cunit.sourceforge.net/ 00:04:58.887 00:04:58.887 00:04:58.887 Suite: components_suite 00:04:58.887 Test: vtophys_malloc_test ...passed 00:04:58.887 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:58.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.887 EAL: Restoring previous memory policy: 4 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was expanded by 4MB 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was shrunk by 4MB 00:04:58.887 EAL: Trying to obtain current memory policy. 00:04:58.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.887 EAL: Restoring previous memory policy: 4 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was expanded by 6MB 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was shrunk by 6MB 00:04:58.887 EAL: Trying to obtain current memory policy. 00:04:58.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.887 EAL: Restoring previous memory policy: 4 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was expanded by 10MB 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was shrunk by 10MB 00:04:58.887 EAL: Trying to obtain current memory policy. 
00:04:58.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.887 EAL: Restoring previous memory policy: 4 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was expanded by 18MB 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was shrunk by 18MB 00:04:58.887 EAL: Trying to obtain current memory policy. 00:04:58.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.887 EAL: Restoring previous memory policy: 4 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was expanded by 34MB 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was shrunk by 34MB 00:04:58.887 EAL: Trying to obtain current memory policy. 00:04:58.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.887 EAL: Restoring previous memory policy: 4 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.887 EAL: request: mp_malloc_sync 00:04:58.887 EAL: No shared files mode enabled, IPC is disabled 00:04:58.887 EAL: Heap on socket 0 was expanded by 66MB 00:04:58.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.147 EAL: request: mp_malloc_sync 00:04:59.147 EAL: No shared files mode enabled, IPC is disabled 00:04:59.147 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.147 EAL: Trying to obtain current memory policy. 00:04:59.147 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.147 EAL: Restoring previous memory policy: 4 00:04:59.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.147 EAL: request: mp_malloc_sync 00:04:59.147 EAL: No shared files mode enabled, IPC is disabled 00:04:59.148 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.148 EAL: request: mp_malloc_sync 00:04:59.148 EAL: No shared files mode enabled, IPC is disabled 00:04:59.148 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.148 EAL: Trying to obtain current memory policy. 00:04:59.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.148 EAL: Restoring previous memory policy: 4 00:04:59.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.148 EAL: request: mp_malloc_sync 00:04:59.148 EAL: No shared files mode enabled, IPC is disabled 00:04:59.148 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.148 EAL: request: mp_malloc_sync 00:04:59.148 EAL: No shared files mode enabled, IPC is disabled 00:04:59.148 EAL: Heap on socket 0 was shrunk by 258MB 00:04:59.148 EAL: Trying to obtain current memory policy. 
00:04:59.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.148 EAL: Restoring previous memory policy: 4 00:04:59.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.148 EAL: request: mp_malloc_sync 00:04:59.148 EAL: No shared files mode enabled, IPC is disabled 00:04:59.148 EAL: Heap on socket 0 was expanded by 514MB 00:04:59.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.407 EAL: request: mp_malloc_sync 00:04:59.407 EAL: No shared files mode enabled, IPC is disabled 00:04:59.407 EAL: Heap on socket 0 was shrunk by 514MB 00:04:59.407 EAL: Trying to obtain current memory policy. 00:04:59.407 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.407 EAL: Restoring previous memory policy: 4 00:04:59.407 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.407 EAL: request: mp_malloc_sync 00:04:59.407 EAL: No shared files mode enabled, IPC is disabled 00:04:59.408 EAL: Heap on socket 0 was expanded by 1026MB 00:04:59.667 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.667 EAL: request: mp_malloc_sync 00:04:59.667 EAL: No shared files mode enabled, IPC is disabled 00:04:59.667 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:59.667 passed 00:04:59.667 00:04:59.667 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.667 suites 1 1 n/a 0 0 00:04:59.667 tests 2 2 2 0 0 00:04:59.667 asserts 497 497 497 0 n/a 00:04:59.667 00:04:59.667 Elapsed time = 0.649 seconds 00:04:59.668 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.668 EAL: request: mp_malloc_sync 00:04:59.668 EAL: No shared files mode enabled, IPC is disabled 00:04:59.668 EAL: Heap on socket 0 was shrunk by 2MB 00:04:59.668 EAL: No shared files mode enabled, IPC is disabled 00:04:59.668 EAL: No shared files mode enabled, IPC is disabled 00:04:59.668 EAL: No shared files mode enabled, IPC is disabled 00:04:59.668 00:04:59.668 real 0m0.780s 00:04:59.668 user 0m0.415s 00:04:59.668 sys 0m0.325s 00:04:59.668 07:57:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.668 07:57:30 -- common/autotest_common.sh@10 -- # set +x 00:04:59.668 ************************************ 00:04:59.668 END TEST env_vtophys 00:04:59.668 ************************************ 00:04:59.668 07:57:30 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:59.668 07:57:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:59.668 07:57:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.668 07:57:30 -- common/autotest_common.sh@10 -- # set +x 00:04:59.668 ************************************ 00:04:59.668 START TEST env_pci 00:04:59.668 ************************************ 00:04:59.668 07:57:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:59.668 00:04:59.668 00:04:59.668 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.668 http://cunit.sourceforge.net/ 00:04:59.668 00:04:59.668 00:04:59.668 Suite: pci 00:04:59.668 Test: pci_hook ...[2024-06-11 07:57:30.256883] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 830745 has claimed it 00:04:59.668 EAL: Cannot find device (10000:00:01.0) 00:04:59.668 EAL: Failed to attach device on primary process 00:04:59.668 passed 00:04:59.668 00:04:59.668 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.668 suites 1 1 n/a 0 0 00:04:59.668 tests 1 1 1 0 0 
00:04:59.668 asserts 25 25 25 0 n/a 00:04:59.668 00:04:59.668 Elapsed time = 0.032 seconds 00:04:59.668 00:04:59.668 real 0m0.053s 00:04:59.668 user 0m0.014s 00:04:59.668 sys 0m0.038s 00:04:59.668 07:57:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.668 07:57:30 -- common/autotest_common.sh@10 -- # set +x 00:04:59.668 ************************************ 00:04:59.668 END TEST env_pci 00:04:59.668 ************************************ 00:04:59.928 07:57:30 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:59.928 07:57:30 -- env/env.sh@15 -- # uname 00:04:59.928 07:57:30 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:59.928 07:57:30 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:59.928 07:57:30 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.928 07:57:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:59.928 07:57:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:59.928 07:57:30 -- common/autotest_common.sh@10 -- # set +x 00:04:59.928 ************************************ 00:04:59.928 START TEST env_dpdk_post_init 00:04:59.928 ************************************ 00:04:59.928 07:57:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:59.928 EAL: Detected CPU lcores: 128 00:04:59.928 EAL: Detected NUMA nodes: 2 00:04:59.928 EAL: Detected shared linkage of DPDK 00:04:59.928 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:59.928 EAL: Selected IOVA mode 'VA' 00:04:59.928 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.928 EAL: VFIO support initialized 00:04:59.928 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:59.928 EAL: Using IOMMU type 1 (Type 1) 00:05:00.189 EAL: Ignore mapping IO port bar(1) 00:05:00.189 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:00.189 EAL: Ignore mapping IO port bar(1) 00:05:00.449 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:00.449 EAL: Ignore mapping IO port bar(1) 00:05:00.709 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:00.709 EAL: Ignore mapping IO port bar(1) 00:05:00.969 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:00.969 EAL: Ignore mapping IO port bar(1) 00:05:00.969 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:01.229 EAL: Ignore mapping IO port bar(1) 00:05:01.229 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:01.489 EAL: Ignore mapping IO port bar(1) 00:05:01.489 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:01.748 EAL: Ignore mapping IO port bar(1) 00:05:01.748 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:02.008 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:02.008 EAL: Ignore mapping IO port bar(1) 00:05:02.268 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:02.268 EAL: Ignore mapping IO port bar(1) 00:05:02.528 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:02.528 EAL: Ignore mapping IO port bar(1) 00:05:02.528 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:02.788 EAL: Ignore mapping IO port bar(1) 00:05:02.788 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:03.049 EAL: Ignore mapping IO port bar(1) 00:05:03.049 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:03.309 EAL: Ignore mapping IO port bar(1) 00:05:03.309 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:03.309 EAL: Ignore mapping IO port bar(1) 00:05:03.568 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:03.568 EAL: Ignore mapping IO port bar(1) 00:05:03.828 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:03.828 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:03.828 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:03.828 Starting DPDK initialization... 00:05:03.828 Starting SPDK post initialization... 00:05:03.828 SPDK NVMe probe 00:05:03.828 Attaching to 0000:65:00.0 00:05:03.828 Attached to 0000:65:00.0 00:05:03.828 Cleaning up... 00:05:05.737 00:05:05.737 real 0m5.723s 00:05:05.737 user 0m0.186s 00:05:05.737 sys 0m0.083s 00:05:05.737 07:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.737 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.737 ************************************ 00:05:05.737 END TEST env_dpdk_post_init 00:05:05.737 ************************************ 00:05:05.737 07:57:36 -- env/env.sh@26 -- # uname 00:05:05.737 07:57:36 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:05.737 07:57:36 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.737 07:57:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:05.737 07:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.737 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.737 ************************************ 00:05:05.737 START TEST env_mem_callbacks 00:05:05.737 ************************************ 00:05:05.737 07:57:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:05.737 EAL: Detected CPU lcores: 128 00:05:05.737 EAL: Detected NUMA nodes: 2 00:05:05.737 EAL: Detected shared linkage of DPDK 00:05:05.737 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.737 EAL: Selected IOVA mode 'VA' 00:05:05.737 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.737 EAL: VFIO support initialized 00:05:05.737 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:05.737 00:05:05.737 00:05:05.737 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.737 http://cunit.sourceforge.net/ 00:05:05.737 00:05:05.737 00:05:05.737 Suite: memory 00:05:05.737 Test: test ... 
00:05:05.737 register 0x200000200000 2097152 00:05:05.737 malloc 3145728 00:05:05.737 register 0x200000400000 4194304 00:05:05.737 buf 0x200000500000 len 3145728 PASSED 00:05:05.737 malloc 64 00:05:05.737 buf 0x2000004fff40 len 64 PASSED 00:05:05.737 malloc 4194304 00:05:05.737 register 0x200000800000 6291456 00:05:05.737 buf 0x200000a00000 len 4194304 PASSED 00:05:05.737 free 0x200000500000 3145728 00:05:05.737 free 0x2000004fff40 64 00:05:05.737 unregister 0x200000400000 4194304 PASSED 00:05:05.737 free 0x200000a00000 4194304 00:05:05.737 unregister 0x200000800000 6291456 PASSED 00:05:05.737 malloc 8388608 00:05:05.737 register 0x200000400000 10485760 00:05:05.737 buf 0x200000600000 len 8388608 PASSED 00:05:05.737 free 0x200000600000 8388608 00:05:05.737 unregister 0x200000400000 10485760 PASSED 00:05:05.737 passed 00:05:05.737 00:05:05.737 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.737 suites 1 1 n/a 0 0 00:05:05.737 tests 1 1 1 0 0 00:05:05.737 asserts 15 15 15 0 n/a 00:05:05.737 00:05:05.737 Elapsed time = 0.004 seconds 00:05:05.737 00:05:05.737 real 0m0.058s 00:05:05.737 user 0m0.019s 00:05:05.737 sys 0m0.038s 00:05:05.737 07:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.737 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.737 ************************************ 00:05:05.737 END TEST env_mem_callbacks 00:05:05.737 ************************************ 00:05:05.737 00:05:05.737 real 0m7.140s 00:05:05.737 user 0m0.948s 00:05:05.737 sys 0m0.737s 00:05:05.737 07:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.737 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.737 ************************************ 00:05:05.737 END TEST env 00:05:05.737 ************************************ 00:05:05.737 07:57:36 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.737 07:57:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:05.737 07:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.737 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.737 ************************************ 00:05:05.737 START TEST rpc 00:05:05.737 ************************************ 00:05:05.737 07:57:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:05.737 * Looking for test storage... 00:05:05.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:05.737 07:57:36 -- rpc/rpc.sh@65 -- # spdk_pid=831903 00:05:05.737 07:57:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.737 07:57:36 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:05.737 07:57:36 -- rpc/rpc.sh@67 -- # waitforlisten 831903 00:05:05.737 07:57:36 -- common/autotest_common.sh@819 -- # '[' -z 831903 ']' 00:05:05.737 07:57:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.738 07:57:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.738 07:57:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:05.738 07:57:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.738 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.998 [2024-06-11 07:57:36.390056] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:05.998 [2024-06-11 07:57:36.390106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831903 ] 00:05:05.998 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.998 [2024-06-11 07:57:36.451580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.998 [2024-06-11 07:57:36.513904] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.998 [2024-06-11 07:57:36.514028] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:05.998 [2024-06-11 07:57:36.514038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 831903' to capture a snapshot of events at runtime. 00:05:05.998 [2024-06-11 07:57:36.514045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid831903 for offline analysis/debug. 00:05:05.998 [2024-06-11 07:57:36.514072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.569 07:57:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.569 07:57:37 -- common/autotest_common.sh@852 -- # return 0 00:05:06.569 07:57:37 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:06.569 07:57:37 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:06.569 07:57:37 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:06.569 07:57:37 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:06.569 07:57:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.569 07:57:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.569 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.569 ************************************ 00:05:06.569 START TEST rpc_integrity 00:05:06.569 ************************************ 00:05:06.569 07:57:37 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:06.569 07:57:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:06.569 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.569 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.569 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.569 07:57:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:06.569 07:57:37 -- rpc/rpc.sh@13 -- # jq length 00:05:06.829 07:57:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:06.829 07:57:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:06.829 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.829 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.829 07:57:37 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:05:06.829 07:57:37 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:06.829 07:57:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.829 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.829 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.829 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.829 07:57:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.829 { 00:05:06.829 "name": "Malloc0", 00:05:06.829 "aliases": [ 00:05:06.829 "66b313cd-3745-4b15-82e3-85e28722b939" 00:05:06.829 ], 00:05:06.829 "product_name": "Malloc disk", 00:05:06.829 "block_size": 512, 00:05:06.829 "num_blocks": 16384, 00:05:06.829 "uuid": "66b313cd-3745-4b15-82e3-85e28722b939", 00:05:06.829 "assigned_rate_limits": { 00:05:06.829 "rw_ios_per_sec": 0, 00:05:06.829 "rw_mbytes_per_sec": 0, 00:05:06.829 "r_mbytes_per_sec": 0, 00:05:06.829 "w_mbytes_per_sec": 0 00:05:06.829 }, 00:05:06.829 "claimed": false, 00:05:06.829 "zoned": false, 00:05:06.829 "supported_io_types": { 00:05:06.829 "read": true, 00:05:06.829 "write": true, 00:05:06.829 "unmap": true, 00:05:06.829 "write_zeroes": true, 00:05:06.829 "flush": true, 00:05:06.829 "reset": true, 00:05:06.829 "compare": false, 00:05:06.829 "compare_and_write": false, 00:05:06.829 "abort": true, 00:05:06.829 "nvme_admin": false, 00:05:06.829 "nvme_io": false 00:05:06.829 }, 00:05:06.829 "memory_domains": [ 00:05:06.829 { 00:05:06.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.829 "dma_device_type": 2 00:05:06.829 } 00:05:06.829 ], 00:05:06.829 "driver_specific": {} 00:05:06.829 } 00:05:06.829 ]' 00:05:06.829 07:57:37 -- rpc/rpc.sh@17 -- # jq length 00:05:06.829 07:57:37 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:06.829 07:57:37 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:06.829 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.829 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.829 [2024-06-11 07:57:37.294779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:06.829 [2024-06-11 07:57:37.294814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:06.829 [2024-06-11 07:57:37.294826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fc1d00 00:05:06.829 [2024-06-11 07:57:37.294833] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:06.829 [2024-06-11 07:57:37.296191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:06.829 [2024-06-11 07:57:37.296211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:06.829 Passthru0 00:05:06.829 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.829 07:57:37 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:06.829 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.829 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.829 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.829 07:57:37 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:06.829 { 00:05:06.829 "name": "Malloc0", 00:05:06.829 "aliases": [ 00:05:06.829 "66b313cd-3745-4b15-82e3-85e28722b939" 00:05:06.829 ], 00:05:06.829 "product_name": "Malloc disk", 00:05:06.829 "block_size": 512, 00:05:06.829 "num_blocks": 16384, 00:05:06.829 "uuid": "66b313cd-3745-4b15-82e3-85e28722b939", 00:05:06.829 "assigned_rate_limits": { 00:05:06.829 "rw_ios_per_sec": 0, 00:05:06.829 "rw_mbytes_per_sec": 0, 00:05:06.829 
"r_mbytes_per_sec": 0, 00:05:06.829 "w_mbytes_per_sec": 0 00:05:06.829 }, 00:05:06.829 "claimed": true, 00:05:06.829 "claim_type": "exclusive_write", 00:05:06.829 "zoned": false, 00:05:06.829 "supported_io_types": { 00:05:06.829 "read": true, 00:05:06.829 "write": true, 00:05:06.829 "unmap": true, 00:05:06.829 "write_zeroes": true, 00:05:06.829 "flush": true, 00:05:06.829 "reset": true, 00:05:06.829 "compare": false, 00:05:06.829 "compare_and_write": false, 00:05:06.829 "abort": true, 00:05:06.829 "nvme_admin": false, 00:05:06.829 "nvme_io": false 00:05:06.829 }, 00:05:06.829 "memory_domains": [ 00:05:06.829 { 00:05:06.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.829 "dma_device_type": 2 00:05:06.829 } 00:05:06.829 ], 00:05:06.829 "driver_specific": {} 00:05:06.829 }, 00:05:06.829 { 00:05:06.829 "name": "Passthru0", 00:05:06.829 "aliases": [ 00:05:06.829 "d56984e9-1f41-5eeb-82e0-e59eea9bb684" 00:05:06.829 ], 00:05:06.829 "product_name": "passthru", 00:05:06.829 "block_size": 512, 00:05:06.829 "num_blocks": 16384, 00:05:06.829 "uuid": "d56984e9-1f41-5eeb-82e0-e59eea9bb684", 00:05:06.829 "assigned_rate_limits": { 00:05:06.830 "rw_ios_per_sec": 0, 00:05:06.830 "rw_mbytes_per_sec": 0, 00:05:06.830 "r_mbytes_per_sec": 0, 00:05:06.830 "w_mbytes_per_sec": 0 00:05:06.830 }, 00:05:06.830 "claimed": false, 00:05:06.830 "zoned": false, 00:05:06.830 "supported_io_types": { 00:05:06.830 "read": true, 00:05:06.830 "write": true, 00:05:06.830 "unmap": true, 00:05:06.830 "write_zeroes": true, 00:05:06.830 "flush": true, 00:05:06.830 "reset": true, 00:05:06.830 "compare": false, 00:05:06.830 "compare_and_write": false, 00:05:06.830 "abort": true, 00:05:06.830 "nvme_admin": false, 00:05:06.830 "nvme_io": false 00:05:06.830 }, 00:05:06.830 "memory_domains": [ 00:05:06.830 { 00:05:06.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.830 "dma_device_type": 2 00:05:06.830 } 00:05:06.830 ], 00:05:06.830 "driver_specific": { 00:05:06.830 "passthru": { 00:05:06.830 "name": "Passthru0", 00:05:06.830 "base_bdev_name": "Malloc0" 00:05:06.830 } 00:05:06.830 } 00:05:06.830 } 00:05:06.830 ]' 00:05:06.830 07:57:37 -- rpc/rpc.sh@21 -- # jq length 00:05:06.830 07:57:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:06.830 07:57:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:06.830 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.830 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.830 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.830 07:57:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:06.830 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.830 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.830 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.830 07:57:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:06.830 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.830 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.830 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.830 07:57:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:06.830 07:57:37 -- rpc/rpc.sh@26 -- # jq length 00:05:06.830 07:57:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:06.830 00:05:06.830 real 0m0.283s 00:05:06.830 user 0m0.184s 00:05:06.830 sys 0m0.027s 00:05:06.830 07:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.830 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:06.830 ************************************ 
00:05:06.830 END TEST rpc_integrity 00:05:06.830 ************************************ 00:05:07.090 07:57:37 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:07.090 07:57:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.090 07:57:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.090 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.090 ************************************ 00:05:07.090 START TEST rpc_plugins 00:05:07.090 ************************************ 00:05:07.090 07:57:37 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:07.090 07:57:37 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:07.090 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.090 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.090 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.090 07:57:37 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:07.090 07:57:37 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:07.090 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.090 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.090 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.090 07:57:37 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:07.090 { 00:05:07.090 "name": "Malloc1", 00:05:07.090 "aliases": [ 00:05:07.090 "01a148c0-be21-47ce-93cf-2c792ddc5679" 00:05:07.090 ], 00:05:07.090 "product_name": "Malloc disk", 00:05:07.090 "block_size": 4096, 00:05:07.090 "num_blocks": 256, 00:05:07.090 "uuid": "01a148c0-be21-47ce-93cf-2c792ddc5679", 00:05:07.090 "assigned_rate_limits": { 00:05:07.090 "rw_ios_per_sec": 0, 00:05:07.090 "rw_mbytes_per_sec": 0, 00:05:07.090 "r_mbytes_per_sec": 0, 00:05:07.090 "w_mbytes_per_sec": 0 00:05:07.090 }, 00:05:07.090 "claimed": false, 00:05:07.090 "zoned": false, 00:05:07.090 "supported_io_types": { 00:05:07.090 "read": true, 00:05:07.090 "write": true, 00:05:07.090 "unmap": true, 00:05:07.090 "write_zeroes": true, 00:05:07.090 "flush": true, 00:05:07.090 "reset": true, 00:05:07.090 "compare": false, 00:05:07.090 "compare_and_write": false, 00:05:07.090 "abort": true, 00:05:07.090 "nvme_admin": false, 00:05:07.090 "nvme_io": false 00:05:07.090 }, 00:05:07.090 "memory_domains": [ 00:05:07.090 { 00:05:07.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.090 "dma_device_type": 2 00:05:07.090 } 00:05:07.090 ], 00:05:07.090 "driver_specific": {} 00:05:07.090 } 00:05:07.090 ]' 00:05:07.090 07:57:37 -- rpc/rpc.sh@32 -- # jq length 00:05:07.090 07:57:37 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:07.090 07:57:37 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:07.090 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.090 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.090 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.090 07:57:37 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:07.090 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.090 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.090 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.090 07:57:37 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:07.090 07:57:37 -- rpc/rpc.sh@36 -- # jq length 00:05:07.090 07:57:37 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:07.090 00:05:07.090 real 0m0.152s 00:05:07.090 user 0m0.095s 00:05:07.090 sys 0m0.018s 00:05:07.090 07:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.090 07:57:37 -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.090 ************************************ 00:05:07.090 END TEST rpc_plugins 00:05:07.090 ************************************ 00:05:07.090 07:57:37 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:07.090 07:57:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.090 07:57:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.090 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.090 ************************************ 00:05:07.090 START TEST rpc_trace_cmd_test 00:05:07.090 ************************************ 00:05:07.090 07:57:37 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:07.090 07:57:37 -- rpc/rpc.sh@40 -- # local info 00:05:07.090 07:57:37 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:07.090 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.090 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.090 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.090 07:57:37 -- rpc/rpc.sh@42 -- # info='{ 00:05:07.090 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid831903", 00:05:07.090 "tpoint_group_mask": "0x8", 00:05:07.090 "iscsi_conn": { 00:05:07.090 "mask": "0x2", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "scsi": { 00:05:07.090 "mask": "0x4", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "bdev": { 00:05:07.090 "mask": "0x8", 00:05:07.090 "tpoint_mask": "0xffffffffffffffff" 00:05:07.090 }, 00:05:07.090 "nvmf_rdma": { 00:05:07.090 "mask": "0x10", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "nvmf_tcp": { 00:05:07.090 "mask": "0x20", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "ftl": { 00:05:07.090 "mask": "0x40", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "blobfs": { 00:05:07.090 "mask": "0x80", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "dsa": { 00:05:07.090 "mask": "0x200", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "thread": { 00:05:07.090 "mask": "0x400", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "nvme_pcie": { 00:05:07.090 "mask": "0x800", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "iaa": { 00:05:07.090 "mask": "0x1000", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "nvme_tcp": { 00:05:07.090 "mask": "0x2000", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 }, 00:05:07.090 "bdev_nvme": { 00:05:07.090 "mask": "0x4000", 00:05:07.090 "tpoint_mask": "0x0" 00:05:07.090 } 00:05:07.090 }' 00:05:07.090 07:57:37 -- rpc/rpc.sh@43 -- # jq length 00:05:07.350 07:57:37 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:07.350 07:57:37 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:07.350 07:57:37 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:07.350 07:57:37 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:07.350 07:57:37 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:07.350 07:57:37 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:07.350 07:57:37 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:07.350 07:57:37 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:07.350 07:57:37 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:07.350 00:05:07.350 real 0m0.241s 00:05:07.350 user 0m0.203s 00:05:07.350 sys 0m0.028s 00:05:07.350 07:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.350 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.350 ************************************ 
00:05:07.350 END TEST rpc_trace_cmd_test 00:05:07.350 ************************************ 00:05:07.350 07:57:37 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:07.350 07:57:37 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:07.350 07:57:37 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:07.350 07:57:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.350 07:57:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.350 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.350 ************************************ 00:05:07.350 START TEST rpc_daemon_integrity 00:05:07.350 ************************************ 00:05:07.350 07:57:37 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:07.350 07:57:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.350 07:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.350 07:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.350 07:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.350 07:57:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.350 07:57:37 -- rpc/rpc.sh@13 -- # jq length 00:05:07.610 07:57:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.610 07:57:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.610 07:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.610 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.610 07:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.610 07:57:38 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:07.610 07:57:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.610 07:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.610 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.610 07:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.610 07:57:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.610 { 00:05:07.610 "name": "Malloc2", 00:05:07.610 "aliases": [ 00:05:07.610 "11cffe93-f447-4f6c-9c7b-69390fed4fff" 00:05:07.610 ], 00:05:07.610 "product_name": "Malloc disk", 00:05:07.610 "block_size": 512, 00:05:07.610 "num_blocks": 16384, 00:05:07.610 "uuid": "11cffe93-f447-4f6c-9c7b-69390fed4fff", 00:05:07.610 "assigned_rate_limits": { 00:05:07.610 "rw_ios_per_sec": 0, 00:05:07.610 "rw_mbytes_per_sec": 0, 00:05:07.610 "r_mbytes_per_sec": 0, 00:05:07.610 "w_mbytes_per_sec": 0 00:05:07.610 }, 00:05:07.610 "claimed": false, 00:05:07.610 "zoned": false, 00:05:07.610 "supported_io_types": { 00:05:07.610 "read": true, 00:05:07.610 "write": true, 00:05:07.610 "unmap": true, 00:05:07.610 "write_zeroes": true, 00:05:07.610 "flush": true, 00:05:07.610 "reset": true, 00:05:07.610 "compare": false, 00:05:07.610 "compare_and_write": false, 00:05:07.610 "abort": true, 00:05:07.610 "nvme_admin": false, 00:05:07.610 "nvme_io": false 00:05:07.610 }, 00:05:07.610 "memory_domains": [ 00:05:07.610 { 00:05:07.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.610 "dma_device_type": 2 00:05:07.610 } 00:05:07.610 ], 00:05:07.610 "driver_specific": {} 00:05:07.610 } 00:05:07.610 ]' 00:05:07.610 07:57:38 -- rpc/rpc.sh@17 -- # jq length 00:05:07.610 07:57:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.610 07:57:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:07.610 07:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.610 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.610 [2024-06-11 07:57:38.092943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:07.610 [2024-06-11 
07:57:38.092977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.610 [2024-06-11 07:57:38.092991] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x216f4e0 00:05:07.611 [2024-06-11 07:57:38.092998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.611 [2024-06-11 07:57:38.094215] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.611 [2024-06-11 07:57:38.094235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.611 Passthru0 00:05:07.611 07:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.611 07:57:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.611 07:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.611 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.611 07:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.611 07:57:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.611 { 00:05:07.611 "name": "Malloc2", 00:05:07.611 "aliases": [ 00:05:07.611 "11cffe93-f447-4f6c-9c7b-69390fed4fff" 00:05:07.611 ], 00:05:07.611 "product_name": "Malloc disk", 00:05:07.611 "block_size": 512, 00:05:07.611 "num_blocks": 16384, 00:05:07.611 "uuid": "11cffe93-f447-4f6c-9c7b-69390fed4fff", 00:05:07.611 "assigned_rate_limits": { 00:05:07.611 "rw_ios_per_sec": 0, 00:05:07.611 "rw_mbytes_per_sec": 0, 00:05:07.611 "r_mbytes_per_sec": 0, 00:05:07.611 "w_mbytes_per_sec": 0 00:05:07.611 }, 00:05:07.611 "claimed": true, 00:05:07.611 "claim_type": "exclusive_write", 00:05:07.611 "zoned": false, 00:05:07.611 "supported_io_types": { 00:05:07.611 "read": true, 00:05:07.611 "write": true, 00:05:07.611 "unmap": true, 00:05:07.611 "write_zeroes": true, 00:05:07.611 "flush": true, 00:05:07.611 "reset": true, 00:05:07.611 "compare": false, 00:05:07.611 "compare_and_write": false, 00:05:07.611 "abort": true, 00:05:07.611 "nvme_admin": false, 00:05:07.611 "nvme_io": false 00:05:07.611 }, 00:05:07.611 "memory_domains": [ 00:05:07.611 { 00:05:07.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.611 "dma_device_type": 2 00:05:07.611 } 00:05:07.611 ], 00:05:07.611 "driver_specific": {} 00:05:07.611 }, 00:05:07.611 { 00:05:07.611 "name": "Passthru0", 00:05:07.611 "aliases": [ 00:05:07.611 "3e9ac74d-7bb1-514f-93bb-8f02108a0312" 00:05:07.611 ], 00:05:07.611 "product_name": "passthru", 00:05:07.611 "block_size": 512, 00:05:07.611 "num_blocks": 16384, 00:05:07.611 "uuid": "3e9ac74d-7bb1-514f-93bb-8f02108a0312", 00:05:07.611 "assigned_rate_limits": { 00:05:07.611 "rw_ios_per_sec": 0, 00:05:07.611 "rw_mbytes_per_sec": 0, 00:05:07.611 "r_mbytes_per_sec": 0, 00:05:07.611 "w_mbytes_per_sec": 0 00:05:07.611 }, 00:05:07.611 "claimed": false, 00:05:07.611 "zoned": false, 00:05:07.611 "supported_io_types": { 00:05:07.611 "read": true, 00:05:07.611 "write": true, 00:05:07.611 "unmap": true, 00:05:07.611 "write_zeroes": true, 00:05:07.611 "flush": true, 00:05:07.611 "reset": true, 00:05:07.611 "compare": false, 00:05:07.611 "compare_and_write": false, 00:05:07.611 "abort": true, 00:05:07.611 "nvme_admin": false, 00:05:07.611 "nvme_io": false 00:05:07.611 }, 00:05:07.611 "memory_domains": [ 00:05:07.611 { 00:05:07.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.611 "dma_device_type": 2 00:05:07.611 } 00:05:07.611 ], 00:05:07.611 "driver_specific": { 00:05:07.611 "passthru": { 00:05:07.611 "name": "Passthru0", 00:05:07.611 "base_bdev_name": "Malloc2" 00:05:07.611 } 00:05:07.611 } 00:05:07.611 } 
00:05:07.611 ]' 00:05:07.611 07:57:38 -- rpc/rpc.sh@21 -- # jq length 00:05:07.611 07:57:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.611 07:57:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.611 07:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.611 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.611 07:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.611 07:57:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:07.611 07:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.611 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.611 07:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.611 07:57:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.611 07:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:07.611 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.611 07:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:07.611 07:57:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.611 07:57:38 -- rpc/rpc.sh@26 -- # jq length 00:05:07.611 07:57:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.611 00:05:07.611 real 0m0.278s 00:05:07.611 user 0m0.177s 00:05:07.611 sys 0m0.037s 00:05:07.611 07:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.611 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.611 ************************************ 00:05:07.611 END TEST rpc_daemon_integrity 00:05:07.611 ************************************ 00:05:07.870 07:57:38 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:07.870 07:57:38 -- rpc/rpc.sh@84 -- # killprocess 831903 00:05:07.870 07:57:38 -- common/autotest_common.sh@926 -- # '[' -z 831903 ']' 00:05:07.870 07:57:38 -- common/autotest_common.sh@930 -- # kill -0 831903 00:05:07.870 07:57:38 -- common/autotest_common.sh@931 -- # uname 00:05:07.870 07:57:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:07.870 07:57:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 831903 00:05:07.870 07:57:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:07.870 07:57:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:07.870 07:57:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 831903' 00:05:07.870 killing process with pid 831903 00:05:07.870 07:57:38 -- common/autotest_common.sh@945 -- # kill 831903 00:05:07.870 07:57:38 -- common/autotest_common.sh@950 -- # wait 831903 00:05:08.130 00:05:08.130 real 0m2.297s 00:05:08.130 user 0m3.031s 00:05:08.130 sys 0m0.569s 00:05:08.130 07:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.130 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.130 ************************************ 00:05:08.130 END TEST rpc 00:05:08.130 ************************************ 00:05:08.130 07:57:38 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:08.130 07:57:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.130 07:57:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.130 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.130 ************************************ 00:05:08.130 START TEST rpc_client 00:05:08.130 ************************************ 00:05:08.130 07:57:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:08.130 * 
Looking for test storage... 00:05:08.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:08.130 07:57:38 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:08.130 OK 00:05:08.130 07:57:38 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:08.130 00:05:08.130 real 0m0.117s 00:05:08.130 user 0m0.057s 00:05:08.130 sys 0m0.068s 00:05:08.130 07:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.130 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.130 ************************************ 00:05:08.130 END TEST rpc_client 00:05:08.130 ************************************ 00:05:08.130 07:57:38 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:08.130 07:57:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.130 07:57:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.130 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.130 ************************************ 00:05:08.130 START TEST json_config 00:05:08.130 ************************************ 00:05:08.130 07:57:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:08.391 07:57:38 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.391 07:57:38 -- nvmf/common.sh@7 -- # uname -s 00:05:08.391 07:57:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.391 07:57:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.391 07:57:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.391 07:57:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.391 07:57:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.391 07:57:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.391 07:57:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.391 07:57:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.391 07:57:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.391 07:57:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.391 07:57:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:08.391 07:57:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:08.391 07:57:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.391 07:57:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.391 07:57:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.391 07:57:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.391 07:57:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.391 07:57:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.391 07:57:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.391 07:57:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.391 07:57:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.391 07:57:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.391 07:57:38 -- paths/export.sh@5 -- # export PATH 00:05:08.391 07:57:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.391 07:57:38 -- nvmf/common.sh@46 -- # : 0 00:05:08.391 07:57:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:08.391 07:57:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:08.391 07:57:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:08.391 07:57:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.391 07:57:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.391 07:57:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:08.391 07:57:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:08.391 07:57:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:08.391 07:57:38 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:08.391 07:57:38 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:08.391 07:57:38 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:08.391 07:57:38 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:08.391 07:57:38 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:08.391 07:57:38 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:08.391 07:57:38 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:08.391 07:57:38 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:08.391 07:57:38 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:08.391 07:57:38 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:08.391 07:57:38 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:08.391 07:57:38 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:08.391 07:57:38 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:08.391 07:57:38 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.391 07:57:38 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:08.391 INFO: JSON configuration test init 00:05:08.391 07:57:38 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:08.391 07:57:38 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:08.391 07:57:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:08.391 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.391 07:57:38 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:08.391 07:57:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:08.391 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.391 07:57:38 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:08.391 07:57:38 -- json_config/json_config.sh@98 -- # local app=target 00:05:08.391 07:57:38 -- json_config/json_config.sh@99 -- # shift 00:05:08.391 07:57:38 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:08.391 07:57:38 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:08.391 07:57:38 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:08.391 07:57:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:08.391 07:57:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:08.392 07:57:38 -- json_config/json_config.sh@111 -- # app_pid[$app]=832749 00:05:08.392 07:57:38 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:08.392 Waiting for target to run... 00:05:08.392 07:57:38 -- json_config/json_config.sh@114 -- # waitforlisten 832749 /var/tmp/spdk_tgt.sock 00:05:08.392 07:57:38 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:08.392 07:57:38 -- common/autotest_common.sh@819 -- # '[' -z 832749 ']' 00:05:08.392 07:57:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.392 07:57:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.392 07:57:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.392 07:57:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.392 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:08.392 [2024-06-11 07:57:38.907247] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
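For readers following the json_config flow above: the harness starts a fresh spdk_tgt with application init deferred to RPC time and waits for its RPC socket before issuing any commands. A minimal shell rendering of that startup is sketched below, using the socket path and flags recorded in this log; the polling loop is a simplified stand-in for the waitforlisten() helper from autotest_common.sh, not its actual implementation, and config.json stands in for the output of gen_nvme.sh --json-with-subsystems that the test pipes in a few lines further down.

# Sketch only: start the target with init deferred until RPC (--wait-for-rpc),
# pinned to core 0 (-m 0x1) with 1024 MB of memory (-s 1024), RPC socket as in the log.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!

# Simplified wait: poll until the RPC UNIX socket exists (waitforlisten does more than this).
until [ -S /var/tmp/spdk_tgt.sock ]; do sleep 0.1; done

# Feed a generated configuration to the target over the RPC socket, as the next step in the log does.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < config.json   # config.json is illustrative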
00:05:08.392 [2024-06-11 07:57:38.907318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832749 ] 00:05:08.392 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.651 [2024-06-11 07:57:39.207248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.651 [2024-06-11 07:57:39.263175] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.651 [2024-06-11 07:57:39.263303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.221 07:57:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.221 07:57:39 -- common/autotest_common.sh@852 -- # return 0 00:05:09.221 07:57:39 -- json_config/json_config.sh@115 -- # echo '' 00:05:09.221 00:05:09.221 07:57:39 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:09.221 07:57:39 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:09.221 07:57:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:09.221 07:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 07:57:39 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:09.221 07:57:39 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:09.221 07:57:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:09.221 07:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 07:57:39 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:09.221 07:57:39 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:09.221 07:57:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:09.791 07:57:40 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:09.791 07:57:40 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:09.791 07:57:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:09.791 07:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.791 07:57:40 -- json_config/json_config.sh@48 -- # local ret=0 00:05:09.791 07:57:40 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:09.791 07:57:40 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:09.791 07:57:40 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:09.791 07:57:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:09.791 07:57:40 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:09.791 07:57:40 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:09.791 07:57:40 -- json_config/json_config.sh@51 -- # local get_types 00:05:09.791 07:57:40 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:09.791 07:57:40 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:09.791 07:57:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:09.791 07:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.791 07:57:40 -- json_config/json_config.sh@58 -- # return 0 00:05:09.791 07:57:40 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:09.791 07:57:40 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:09.791 07:57:40 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:09.791 07:57:40 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:09.791 07:57:40 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:09.791 07:57:40 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:09.791 07:57:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:09.791 07:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.791 07:57:40 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:09.791 07:57:40 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:09.791 07:57:40 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:09.791 07:57:40 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:09.791 07:57:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.050 MallocForNvmf0 00:05:10.050 07:57:40 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.050 07:57:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.310 MallocForNvmf1 00:05:10.310 07:57:40 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.310 07:57:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.310 [2024-06-11 07:57:40.886256] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.310 07:57:40 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.310 07:57:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.570 07:57:41 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.570 07:57:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.829 07:57:41 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.829 07:57:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.829 07:57:41 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:10.829 07:57:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.089 [2024-06-11 07:57:41.520293] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
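The create_nvmf_subsystem_config step above is just a short chain of rpc.py calls. Collected in one place, and with the repository paths shortened for readability, the sequence this run issued looks like the sketch below; the socket path, bdev sizes, NQN, serial number, address and port are taken verbatim from the log.

# Create the malloc bdevs that will back the namespaces (size in MB, then block size in bytes;
# the log's own bdev dumps confirm 8 MB / 512 B and 4 MB / 1024 B).
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1

# Bring up the TCP transport, then create the subsystem and attach both namespaces.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1

# Expose the subsystem on the loopback TCP listener the target reports
# ("NVMe/TCP Target Listening on 127.0.0.1 port 4420").
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420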
00:05:11.089 07:57:41 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:11.089 07:57:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:11.089 07:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:11.089 07:57:41 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:11.089 07:57:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:11.089 07:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:11.089 07:57:41 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:11.089 07:57:41 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.089 07:57:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.348 MallocBdevForConfigChangeCheck 00:05:11.348 07:57:41 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:11.348 07:57:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:11.348 07:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:11.348 07:57:41 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:11.348 07:57:41 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.618 07:57:42 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:11.618 INFO: shutting down applications... 00:05:11.618 07:57:42 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:11.618 07:57:42 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:11.618 07:57:42 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:11.618 07:57:42 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:11.880 Calling clear_iscsi_subsystem 00:05:11.880 Calling clear_nvmf_subsystem 00:05:11.880 Calling clear_nbd_subsystem 00:05:11.880 Calling clear_ublk_subsystem 00:05:11.880 Calling clear_vhost_blk_subsystem 00:05:11.880 Calling clear_vhost_scsi_subsystem 00:05:11.880 Calling clear_scheduler_subsystem 00:05:11.880 Calling clear_bdev_subsystem 00:05:11.880 Calling clear_accel_subsystem 00:05:11.880 Calling clear_vmd_subsystem 00:05:11.880 Calling clear_sock_subsystem 00:05:11.880 Calling clear_iobuf_subsystem 00:05:11.880 07:57:42 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:11.880 07:57:42 -- json_config/json_config.sh@396 -- # count=100 00:05:11.880 07:57:42 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:11.880 07:57:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.880 07:57:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:11.880 07:57:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.450 07:57:42 -- json_config/json_config.sh@398 -- # break 00:05:12.450 07:57:42 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:12.450 07:57:42 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:12.450 07:57:42 -- json_config/json_config.sh@120 -- # local app=target 00:05:12.450 07:57:42 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:12.450 07:57:42 -- json_config/json_config.sh@124 -- # [[ -n 832749 ]] 00:05:12.450 07:57:42 -- json_config/json_config.sh@127 -- # kill -SIGINT 832749 00:05:12.450 07:57:42 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:12.450 07:57:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:12.450 07:57:42 -- json_config/json_config.sh@130 -- # kill -0 832749 00:05:12.450 07:57:42 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:12.710 07:57:43 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:12.710 07:57:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:12.710 07:57:43 -- json_config/json_config.sh@130 -- # kill -0 832749 00:05:12.710 07:57:43 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:12.710 07:57:43 -- json_config/json_config.sh@132 -- # break 00:05:12.710 07:57:43 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:12.710 07:57:43 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:12.710 SPDK target shutdown done 00:05:12.710 07:57:43 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:12.710 INFO: relaunching applications... 00:05:12.710 07:57:43 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.710 07:57:43 -- json_config/json_config.sh@98 -- # local app=target 00:05:12.710 07:57:43 -- json_config/json_config.sh@99 -- # shift 00:05:12.710 07:57:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:12.710 07:57:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:12.710 07:57:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:12.711 07:57:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:12.711 07:57:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:12.711 07:57:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=833751 00:05:12.711 07:57:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:12.711 Waiting for target to run... 00:05:12.711 07:57:43 -- json_config/json_config.sh@114 -- # waitforlisten 833751 /var/tmp/spdk_tgt.sock 00:05:12.711 07:57:43 -- common/autotest_common.sh@819 -- # '[' -z 833751 ']' 00:05:12.711 07:57:43 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.711 07:57:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.711 07:57:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.711 07:57:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.711 07:57:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.711 07:57:43 -- common/autotest_common.sh@10 -- # set +x 00:05:12.711 [2024-06-11 07:57:43.350034] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
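The shutdown-and-relaunch step recorded above reduces to: signal the target, poll until the PID disappears, then start a new instance seeded from the saved JSON instead of --wait-for-rpc. A simplified shell sketch follows, with $tgt_pid standing in for the recorded PID (832749 in this run) and repository paths shortened.

# Ask the running target to shut down cleanly, then poll until it exits
# (the harness allows up to 30 checks at 0.5 s intervals, i.e. roughly 15 s).
kill -SIGINT "$tgt_pid"
for i in $(seq 1 30); do
    kill -0 "$tgt_pid" 2>/dev/null || break
    sleep 0.5
done

# Relaunch non-interactively from the configuration saved earlier; with --json the
# whole setup is replayed at startup, so no further RPCs are needed to restore state.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
tgt_pid=$!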
00:05:12.711 [2024-06-11 07:57:43.350111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833751 ] 00:05:12.970 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.230 [2024-06-11 07:57:43.620027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.230 [2024-06-11 07:57:43.669793] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.230 [2024-06-11 07:57:43.669916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.800 [2024-06-11 07:57:44.160366] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:13.800 [2024-06-11 07:57:44.192715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.370 07:57:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.370 07:57:44 -- common/autotest_common.sh@852 -- # return 0 00:05:14.370 07:57:44 -- json_config/json_config.sh@115 -- # echo '' 00:05:14.370 00:05:14.370 07:57:44 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:14.370 07:57:44 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.370 INFO: Checking if target configuration is the same... 00:05:14.370 07:57:44 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:14.370 07:57:44 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.370 07:57:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.370 + '[' 2 -ne 2 ']' 00:05:14.370 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.370 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.370 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.370 +++ basename /dev/fd/62 00:05:14.370 ++ mktemp /tmp/62.XXX 00:05:14.370 + tmp_file_1=/tmp/62.SIe 00:05:14.370 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.370 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.370 + tmp_file_2=/tmp/spdk_tgt_config.json.Dgr 00:05:14.370 + ret=0 00:05:14.370 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.630 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.630 + diff -u /tmp/62.SIe /tmp/spdk_tgt_config.json.Dgr 00:05:14.630 + echo 'INFO: JSON config files are the same' 00:05:14.630 INFO: JSON config files are the same 00:05:14.630 + rm /tmp/62.SIe /tmp/spdk_tgt_config.json.Dgr 00:05:14.630 + exit 0 00:05:14.630 07:57:45 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:14.630 07:57:45 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:14.630 INFO: changing configuration and checking if this can be detected... 
00:05:14.630 07:57:45 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.630 07:57:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.630 07:57:45 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:14.630 07:57:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.630 07:57:45 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.630 + '[' 2 -ne 2 ']' 00:05:14.630 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.630 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.630 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.630 +++ basename /dev/fd/62 00:05:14.630 ++ mktemp /tmp/62.XXX 00:05:14.630 + tmp_file_1=/tmp/62.w5j 00:05:14.630 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.630 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.630 + tmp_file_2=/tmp/spdk_tgt_config.json.hRk 00:05:14.630 + ret=0 00:05:14.630 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.890 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:15.151 + diff -u /tmp/62.w5j /tmp/spdk_tgt_config.json.hRk 00:05:15.151 + ret=1 00:05:15.151 + echo '=== Start of file: /tmp/62.w5j ===' 00:05:15.151 + cat /tmp/62.w5j 00:05:15.151 + echo '=== End of file: /tmp/62.w5j ===' 00:05:15.151 + echo '' 00:05:15.151 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hRk ===' 00:05:15.151 + cat /tmp/spdk_tgt_config.json.hRk 00:05:15.151 + echo '=== End of file: /tmp/spdk_tgt_config.json.hRk ===' 00:05:15.151 + echo '' 00:05:15.151 + rm /tmp/62.w5j /tmp/spdk_tgt_config.json.hRk 00:05:15.151 + exit 1 00:05:15.151 07:57:45 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:15.151 INFO: configuration change detected. 
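The two comparisons above ("JSON config files are the same", then "configuration change detected") rely on the same trick: dump the live configuration with save_config, normalize it so key ordering cannot cause spurious differences, and diff the result against a reference. A condensed sketch of that flow is below; the temp file names are illustrative, and the exact plumbing in json_diff.sh (process substitution through /dev/fd/62) is simplified here.

# Snapshot the running target's configuration, normalized with the same sort filter the test uses.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/before.json

# Make a change that must show up in the config, e.g. remove the canary bdev created for this check.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

# Snapshot again and compare: identical files mean the change was missed, a non-empty diff means it was detected.
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/after.json
diff -u /tmp/before.json /tmp/after.json \
    && echo 'configs are the same' \
    || echo 'configuration change detected'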
00:05:15.151 07:57:45 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:15.151 07:57:45 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:15.151 07:57:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.151 07:57:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.151 07:57:45 -- json_config/json_config.sh@360 -- # local ret=0 00:05:15.151 07:57:45 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:15.151 07:57:45 -- json_config/json_config.sh@370 -- # [[ -n 833751 ]] 00:05:15.151 07:57:45 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:15.151 07:57:45 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:15.151 07:57:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.151 07:57:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.151 07:57:45 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:15.151 07:57:45 -- json_config/json_config.sh@246 -- # uname -s 00:05:15.151 07:57:45 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:15.151 07:57:45 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:15.151 07:57:45 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:15.151 07:57:45 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:15.151 07:57:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:15.151 07:57:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.151 07:57:45 -- json_config/json_config.sh@376 -- # killprocess 833751 00:05:15.151 07:57:45 -- common/autotest_common.sh@926 -- # '[' -z 833751 ']' 00:05:15.151 07:57:45 -- common/autotest_common.sh@930 -- # kill -0 833751 00:05:15.151 07:57:45 -- common/autotest_common.sh@931 -- # uname 00:05:15.151 07:57:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:15.151 07:57:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 833751 00:05:15.151 07:57:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:15.151 07:57:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:15.151 07:57:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 833751' 00:05:15.151 killing process with pid 833751 00:05:15.151 07:57:45 -- common/autotest_common.sh@945 -- # kill 833751 00:05:15.151 07:57:45 -- common/autotest_common.sh@950 -- # wait 833751 00:05:15.411 07:57:45 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.411 07:57:45 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:15.411 07:57:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:15.411 07:57:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.411 07:57:46 -- json_config/json_config.sh@381 -- # return 0 00:05:15.411 07:57:46 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:15.411 INFO: Success 00:05:15.411 00:05:15.411 real 0m7.276s 00:05:15.411 user 0m8.792s 00:05:15.411 sys 0m1.684s 00:05:15.411 07:57:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.411 07:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:15.411 ************************************ 00:05:15.411 END TEST json_config 00:05:15.411 ************************************ 00:05:15.411 07:57:46 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.411 07:57:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.411 07:57:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.411 07:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:15.672 ************************************ 00:05:15.672 START TEST json_config_extra_key 00:05:15.672 ************************************ 00:05:15.672 07:57:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.672 07:57:46 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.672 07:57:46 -- nvmf/common.sh@7 -- # uname -s 00:05:15.672 07:57:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.672 07:57:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.672 07:57:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.672 07:57:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.672 07:57:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.672 07:57:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.672 07:57:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.672 07:57:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.672 07:57:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.672 07:57:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.672 07:57:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:15.672 07:57:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:15.672 07:57:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.672 07:57:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.672 07:57:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.672 07:57:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.672 07:57:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.672 07:57:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.672 07:57:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.672 07:57:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.672 07:57:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.672 07:57:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.672 07:57:46 -- paths/export.sh@5 -- # export PATH 00:05:15.672 07:57:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.672 07:57:46 -- nvmf/common.sh@46 -- # : 0 00:05:15.672 07:57:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:15.672 07:57:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:15.672 07:57:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:15.672 07:57:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.672 07:57:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.672 07:57:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:15.672 07:57:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:15.672 07:57:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:15.672 07:57:46 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:15.672 07:57:46 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:15.672 07:57:46 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:15.673 INFO: launching applications... 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=834357 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:15.673 Waiting for target to run... 
00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 834357 /var/tmp/spdk_tgt.sock 00:05:15.673 07:57:46 -- common/autotest_common.sh@819 -- # '[' -z 834357 ']' 00:05:15.673 07:57:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.673 07:57:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.673 07:57:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.673 07:57:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.673 07:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:15.673 07:57:46 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.673 [2024-06-11 07:57:46.191819] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:15.673 [2024-06-11 07:57:46.191882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834357 ] 00:05:15.673 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.933 [2024-06-11 07:57:46.438849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.933 [2024-06-11 07:57:46.487312] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.933 [2024-06-11 07:57:46.487446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.562 07:57:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.562 07:57:46 -- common/autotest_common.sh@852 -- # return 0 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:16.562 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:16.562 INFO: shutting down applications... 
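For reference, the json_config_extra_key flow traced above amounts to launching spdk_tgt against extra_key.json, polling its RPC socket until it answers, and then shutting it down. A rough shell sketch of that sequence, using only the paths and flags visible in the log; the variable names and the retry loop are illustrative, not the literal test code:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/spdk_tgt.sock

    # Start the target on core 0 with 1024 MiB of memory, applying the extra_key JSON config.
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
        --json "$SPDK/test/json_config/extra_key.json" &
    tgt_pid=$!

    # Rough equivalent of waitforlisten: poll the RPC socket until the target responds.
    for _ in $(seq 1 30); do
        "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done

    # Shut down the same way the test does: SIGINT, then wait for the process to exit.
    kill -SIGINT "$tgt_pid"
    wait "$tgt_pid"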
00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 834357 ]] 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 834357 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@50 -- # kill -0 834357 00:05:16.562 07:57:46 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 834357 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:16.847 SPDK target shutdown done 00:05:16.847 07:57:47 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:16.847 Success 00:05:16.847 00:05:16.847 real 0m1.382s 00:05:16.847 user 0m1.050s 00:05:16.847 sys 0m0.314s 00:05:16.847 07:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.847 07:57:47 -- common/autotest_common.sh@10 -- # set +x 00:05:16.847 ************************************ 00:05:16.847 END TEST json_config_extra_key 00:05:16.847 ************************************ 00:05:16.847 07:57:47 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.847 07:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.847 07:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.847 07:57:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.136 ************************************ 00:05:17.136 START TEST alias_rpc 00:05:17.136 ************************************ 00:05:17.137 07:57:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.137 * Looking for test storage... 00:05:17.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:17.137 07:57:47 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.137 07:57:47 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=834744 00:05:17.137 07:57:47 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 834744 00:05:17.137 07:57:47 -- common/autotest_common.sh@819 -- # '[' -z 834744 ']' 00:05:17.137 07:57:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.137 07:57:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:17.137 07:57:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.137 07:57:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:17.137 07:57:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.137 07:57:47 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.137 [2024-06-11 07:57:47.625320] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:17.137 [2024-06-11 07:57:47.625379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834744 ] 00:05:17.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.137 [2024-06-11 07:57:47.685937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.137 [2024-06-11 07:57:47.751230] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.137 [2024-06-11 07:57:47.751358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.793 07:57:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.793 07:57:48 -- common/autotest_common.sh@852 -- # return 0 00:05:17.793 07:57:48 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:18.054 07:57:48 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 834744 00:05:18.054 07:57:48 -- common/autotest_common.sh@926 -- # '[' -z 834744 ']' 00:05:18.054 07:57:48 -- common/autotest_common.sh@930 -- # kill -0 834744 00:05:18.054 07:57:48 -- common/autotest_common.sh@931 -- # uname 00:05:18.054 07:57:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.054 07:57:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 834744 00:05:18.054 07:57:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.054 07:57:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.054 07:57:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 834744' 00:05:18.054 killing process with pid 834744 00:05:18.054 07:57:48 -- common/autotest_common.sh@945 -- # kill 834744 00:05:18.054 07:57:48 -- common/autotest_common.sh@950 -- # wait 834744 00:05:18.314 00:05:18.314 real 0m1.322s 00:05:18.314 user 0m1.453s 00:05:18.314 sys 0m0.340s 00:05:18.314 07:57:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.314 07:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.314 ************************************ 00:05:18.314 END TEST alias_rpc 00:05:18.314 ************************************ 00:05:18.314 07:57:48 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:18.314 07:57:48 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.314 07:57:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.314 07:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.314 07:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.314 ************************************ 00:05:18.314 START TEST spdkcli_tcp 00:05:18.314 ************************************ 00:05:18.314 07:57:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.314 * Looking for test storage... 
00:05:18.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:18.314 07:57:48 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:18.314 07:57:48 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:18.314 07:57:48 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:18.314 07:57:48 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.314 07:57:48 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.314 07:57:48 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.314 07:57:48 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.314 07:57:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:18.314 07:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.314 07:57:48 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=835135 00:05:18.315 07:57:48 -- spdkcli/tcp.sh@27 -- # waitforlisten 835135 00:05:18.315 07:57:48 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.315 07:57:48 -- common/autotest_common.sh@819 -- # '[' -z 835135 ']' 00:05:18.315 07:57:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.315 07:57:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.315 07:57:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.315 07:57:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.315 07:57:48 -- common/autotest_common.sh@10 -- # set +x 00:05:18.575 [2024-06-11 07:57:49.011323] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:18.575 [2024-06-11 07:57:49.011394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835135 ] 00:05:18.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.575 [2024-06-11 07:57:49.074532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.575 [2024-06-11 07:57:49.144199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:18.575 [2024-06-11 07:57:49.144470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.575 [2024-06-11 07:57:49.144469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.143 07:57:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.143 07:57:49 -- common/autotest_common.sh@852 -- # return 0 00:05:19.143 07:57:49 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:19.143 07:57:49 -- spdkcli/tcp.sh@31 -- # socat_pid=835231 00:05:19.143 07:57:49 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:19.404 [ 00:05:19.404 "bdev_malloc_delete", 00:05:19.404 "bdev_malloc_create", 00:05:19.404 "bdev_null_resize", 00:05:19.404 "bdev_null_delete", 00:05:19.404 "bdev_null_create", 00:05:19.404 "bdev_nvme_cuse_unregister", 00:05:19.404 "bdev_nvme_cuse_register", 00:05:19.404 "bdev_opal_new_user", 00:05:19.404 "bdev_opal_set_lock_state", 00:05:19.404 "bdev_opal_delete", 00:05:19.404 "bdev_opal_get_info", 00:05:19.404 "bdev_opal_create", 00:05:19.404 "bdev_nvme_opal_revert", 00:05:19.404 "bdev_nvme_opal_init", 00:05:19.404 "bdev_nvme_send_cmd", 00:05:19.404 "bdev_nvme_get_path_iostat", 00:05:19.404 "bdev_nvme_get_mdns_discovery_info", 00:05:19.404 "bdev_nvme_stop_mdns_discovery", 00:05:19.404 "bdev_nvme_start_mdns_discovery", 00:05:19.404 "bdev_nvme_set_multipath_policy", 00:05:19.404 "bdev_nvme_set_preferred_path", 00:05:19.404 "bdev_nvme_get_io_paths", 00:05:19.404 "bdev_nvme_remove_error_injection", 00:05:19.404 "bdev_nvme_add_error_injection", 00:05:19.404 "bdev_nvme_get_discovery_info", 00:05:19.404 "bdev_nvme_stop_discovery", 00:05:19.404 "bdev_nvme_start_discovery", 00:05:19.404 "bdev_nvme_get_controller_health_info", 00:05:19.404 "bdev_nvme_disable_controller", 00:05:19.404 "bdev_nvme_enable_controller", 00:05:19.404 "bdev_nvme_reset_controller", 00:05:19.404 "bdev_nvme_get_transport_statistics", 00:05:19.404 "bdev_nvme_apply_firmware", 00:05:19.404 "bdev_nvme_detach_controller", 00:05:19.404 "bdev_nvme_get_controllers", 00:05:19.404 "bdev_nvme_attach_controller", 00:05:19.404 "bdev_nvme_set_hotplug", 00:05:19.404 "bdev_nvme_set_options", 00:05:19.404 "bdev_passthru_delete", 00:05:19.404 "bdev_passthru_create", 00:05:19.404 "bdev_lvol_grow_lvstore", 00:05:19.404 "bdev_lvol_get_lvols", 00:05:19.404 "bdev_lvol_get_lvstores", 00:05:19.404 "bdev_lvol_delete", 00:05:19.404 "bdev_lvol_set_read_only", 00:05:19.404 "bdev_lvol_resize", 00:05:19.404 "bdev_lvol_decouple_parent", 00:05:19.404 "bdev_lvol_inflate", 00:05:19.404 "bdev_lvol_rename", 00:05:19.404 "bdev_lvol_clone_bdev", 00:05:19.404 "bdev_lvol_clone", 00:05:19.404 "bdev_lvol_snapshot", 00:05:19.404 "bdev_lvol_create", 00:05:19.404 "bdev_lvol_delete_lvstore", 00:05:19.404 "bdev_lvol_rename_lvstore", 00:05:19.404 "bdev_lvol_create_lvstore", 00:05:19.405 "bdev_raid_set_options", 00:05:19.405 
"bdev_raid_remove_base_bdev", 00:05:19.405 "bdev_raid_add_base_bdev", 00:05:19.405 "bdev_raid_delete", 00:05:19.405 "bdev_raid_create", 00:05:19.405 "bdev_raid_get_bdevs", 00:05:19.405 "bdev_error_inject_error", 00:05:19.405 "bdev_error_delete", 00:05:19.405 "bdev_error_create", 00:05:19.405 "bdev_split_delete", 00:05:19.405 "bdev_split_create", 00:05:19.405 "bdev_delay_delete", 00:05:19.405 "bdev_delay_create", 00:05:19.405 "bdev_delay_update_latency", 00:05:19.405 "bdev_zone_block_delete", 00:05:19.405 "bdev_zone_block_create", 00:05:19.405 "blobfs_create", 00:05:19.405 "blobfs_detect", 00:05:19.405 "blobfs_set_cache_size", 00:05:19.405 "bdev_aio_delete", 00:05:19.405 "bdev_aio_rescan", 00:05:19.405 "bdev_aio_create", 00:05:19.405 "bdev_ftl_set_property", 00:05:19.405 "bdev_ftl_get_properties", 00:05:19.405 "bdev_ftl_get_stats", 00:05:19.405 "bdev_ftl_unmap", 00:05:19.405 "bdev_ftl_unload", 00:05:19.405 "bdev_ftl_delete", 00:05:19.405 "bdev_ftl_load", 00:05:19.405 "bdev_ftl_create", 00:05:19.405 "bdev_virtio_attach_controller", 00:05:19.405 "bdev_virtio_scsi_get_devices", 00:05:19.405 "bdev_virtio_detach_controller", 00:05:19.405 "bdev_virtio_blk_set_hotplug", 00:05:19.405 "bdev_iscsi_delete", 00:05:19.405 "bdev_iscsi_create", 00:05:19.405 "bdev_iscsi_set_options", 00:05:19.405 "accel_error_inject_error", 00:05:19.405 "ioat_scan_accel_module", 00:05:19.405 "dsa_scan_accel_module", 00:05:19.405 "iaa_scan_accel_module", 00:05:19.405 "iscsi_set_options", 00:05:19.405 "iscsi_get_auth_groups", 00:05:19.405 "iscsi_auth_group_remove_secret", 00:05:19.405 "iscsi_auth_group_add_secret", 00:05:19.405 "iscsi_delete_auth_group", 00:05:19.405 "iscsi_create_auth_group", 00:05:19.405 "iscsi_set_discovery_auth", 00:05:19.405 "iscsi_get_options", 00:05:19.405 "iscsi_target_node_request_logout", 00:05:19.405 "iscsi_target_node_set_redirect", 00:05:19.405 "iscsi_target_node_set_auth", 00:05:19.405 "iscsi_target_node_add_lun", 00:05:19.405 "iscsi_get_connections", 00:05:19.405 "iscsi_portal_group_set_auth", 00:05:19.405 "iscsi_start_portal_group", 00:05:19.405 "iscsi_delete_portal_group", 00:05:19.405 "iscsi_create_portal_group", 00:05:19.405 "iscsi_get_portal_groups", 00:05:19.405 "iscsi_delete_target_node", 00:05:19.405 "iscsi_target_node_remove_pg_ig_maps", 00:05:19.405 "iscsi_target_node_add_pg_ig_maps", 00:05:19.405 "iscsi_create_target_node", 00:05:19.405 "iscsi_get_target_nodes", 00:05:19.405 "iscsi_delete_initiator_group", 00:05:19.405 "iscsi_initiator_group_remove_initiators", 00:05:19.405 "iscsi_initiator_group_add_initiators", 00:05:19.405 "iscsi_create_initiator_group", 00:05:19.405 "iscsi_get_initiator_groups", 00:05:19.405 "nvmf_set_crdt", 00:05:19.405 "nvmf_set_config", 00:05:19.405 "nvmf_set_max_subsystems", 00:05:19.405 "nvmf_subsystem_get_listeners", 00:05:19.405 "nvmf_subsystem_get_qpairs", 00:05:19.405 "nvmf_subsystem_get_controllers", 00:05:19.405 "nvmf_get_stats", 00:05:19.405 "nvmf_get_transports", 00:05:19.405 "nvmf_create_transport", 00:05:19.405 "nvmf_get_targets", 00:05:19.405 "nvmf_delete_target", 00:05:19.405 "nvmf_create_target", 00:05:19.405 "nvmf_subsystem_allow_any_host", 00:05:19.405 "nvmf_subsystem_remove_host", 00:05:19.405 "nvmf_subsystem_add_host", 00:05:19.405 "nvmf_subsystem_remove_ns", 00:05:19.405 "nvmf_subsystem_add_ns", 00:05:19.405 "nvmf_subsystem_listener_set_ana_state", 00:05:19.405 "nvmf_discovery_get_referrals", 00:05:19.405 "nvmf_discovery_remove_referral", 00:05:19.405 "nvmf_discovery_add_referral", 00:05:19.405 "nvmf_subsystem_remove_listener", 
00:05:19.405 "nvmf_subsystem_add_listener", 00:05:19.405 "nvmf_delete_subsystem", 00:05:19.405 "nvmf_create_subsystem", 00:05:19.405 "nvmf_get_subsystems", 00:05:19.405 "env_dpdk_get_mem_stats", 00:05:19.405 "nbd_get_disks", 00:05:19.405 "nbd_stop_disk", 00:05:19.405 "nbd_start_disk", 00:05:19.405 "ublk_recover_disk", 00:05:19.405 "ublk_get_disks", 00:05:19.405 "ublk_stop_disk", 00:05:19.405 "ublk_start_disk", 00:05:19.405 "ublk_destroy_target", 00:05:19.405 "ublk_create_target", 00:05:19.405 "virtio_blk_create_transport", 00:05:19.405 "virtio_blk_get_transports", 00:05:19.405 "vhost_controller_set_coalescing", 00:05:19.405 "vhost_get_controllers", 00:05:19.405 "vhost_delete_controller", 00:05:19.405 "vhost_create_blk_controller", 00:05:19.405 "vhost_scsi_controller_remove_target", 00:05:19.405 "vhost_scsi_controller_add_target", 00:05:19.405 "vhost_start_scsi_controller", 00:05:19.405 "vhost_create_scsi_controller", 00:05:19.405 "thread_set_cpumask", 00:05:19.405 "framework_get_scheduler", 00:05:19.405 "framework_set_scheduler", 00:05:19.405 "framework_get_reactors", 00:05:19.405 "thread_get_io_channels", 00:05:19.405 "thread_get_pollers", 00:05:19.405 "thread_get_stats", 00:05:19.405 "framework_monitor_context_switch", 00:05:19.405 "spdk_kill_instance", 00:05:19.405 "log_enable_timestamps", 00:05:19.405 "log_get_flags", 00:05:19.405 "log_clear_flag", 00:05:19.405 "log_set_flag", 00:05:19.405 "log_get_level", 00:05:19.405 "log_set_level", 00:05:19.405 "log_get_print_level", 00:05:19.405 "log_set_print_level", 00:05:19.405 "framework_enable_cpumask_locks", 00:05:19.405 "framework_disable_cpumask_locks", 00:05:19.405 "framework_wait_init", 00:05:19.405 "framework_start_init", 00:05:19.405 "scsi_get_devices", 00:05:19.405 "bdev_get_histogram", 00:05:19.405 "bdev_enable_histogram", 00:05:19.405 "bdev_set_qos_limit", 00:05:19.405 "bdev_set_qd_sampling_period", 00:05:19.405 "bdev_get_bdevs", 00:05:19.405 "bdev_reset_iostat", 00:05:19.405 "bdev_get_iostat", 00:05:19.405 "bdev_examine", 00:05:19.405 "bdev_wait_for_examine", 00:05:19.405 "bdev_set_options", 00:05:19.405 "notify_get_notifications", 00:05:19.405 "notify_get_types", 00:05:19.405 "accel_get_stats", 00:05:19.405 "accel_set_options", 00:05:19.405 "accel_set_driver", 00:05:19.405 "accel_crypto_key_destroy", 00:05:19.405 "accel_crypto_keys_get", 00:05:19.405 "accel_crypto_key_create", 00:05:19.405 "accel_assign_opc", 00:05:19.405 "accel_get_module_info", 00:05:19.405 "accel_get_opc_assignments", 00:05:19.405 "vmd_rescan", 00:05:19.405 "vmd_remove_device", 00:05:19.405 "vmd_enable", 00:05:19.405 "sock_set_default_impl", 00:05:19.405 "sock_impl_set_options", 00:05:19.405 "sock_impl_get_options", 00:05:19.405 "iobuf_get_stats", 00:05:19.405 "iobuf_set_options", 00:05:19.405 "framework_get_pci_devices", 00:05:19.405 "framework_get_config", 00:05:19.405 "framework_get_subsystems", 00:05:19.405 "trace_get_info", 00:05:19.405 "trace_get_tpoint_group_mask", 00:05:19.405 "trace_disable_tpoint_group", 00:05:19.405 "trace_enable_tpoint_group", 00:05:19.405 "trace_clear_tpoint_mask", 00:05:19.405 "trace_set_tpoint_mask", 00:05:19.405 "spdk_get_version", 00:05:19.405 "rpc_get_methods" 00:05:19.405 ] 00:05:19.405 07:57:49 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:19.405 07:57:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:19.405 07:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.405 07:57:49 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:19.405 07:57:49 -- spdkcli/tcp.sh@38 -- # killprocess 
835135 00:05:19.405 07:57:49 -- common/autotest_common.sh@926 -- # '[' -z 835135 ']' 00:05:19.405 07:57:49 -- common/autotest_common.sh@930 -- # kill -0 835135 00:05:19.405 07:57:49 -- common/autotest_common.sh@931 -- # uname 00:05:19.405 07:57:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:19.405 07:57:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 835135 00:05:19.405 07:57:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:19.405 07:57:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:19.405 07:57:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 835135' 00:05:19.405 killing process with pid 835135 00:05:19.405 07:57:50 -- common/autotest_common.sh@945 -- # kill 835135 00:05:19.405 07:57:50 -- common/autotest_common.sh@950 -- # wait 835135 00:05:19.665 00:05:19.665 real 0m1.371s 00:05:19.665 user 0m2.489s 00:05:19.665 sys 0m0.420s 00:05:19.665 07:57:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.665 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.665 ************************************ 00:05:19.665 END TEST spdkcli_tcp 00:05:19.665 ************************************ 00:05:19.665 07:57:50 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.665 07:57:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.665 07:57:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.665 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.665 ************************************ 00:05:19.665 START TEST dpdk_mem_utility 00:05:19.665 ************************************ 00:05:19.665 07:57:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.925 * Looking for test storage... 00:05:19.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:19.925 07:57:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.925 07:57:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=835542 00:05:19.925 07:57:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 835542 00:05:19.925 07:57:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.925 07:57:50 -- common/autotest_common.sh@819 -- # '[' -z 835542 ']' 00:05:19.925 07:57:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.925 07:57:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.925 07:57:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.925 07:57:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.925 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.925 [2024-06-11 07:57:50.403278] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:19.925 [2024-06-11 07:57:50.403337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835542 ] 00:05:19.925 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.925 [2024-06-11 07:57:50.463807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.925 [2024-06-11 07:57:50.529239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.925 [2024-06-11 07:57:50.529361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.864 07:57:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:20.864 07:57:51 -- common/autotest_common.sh@852 -- # return 0 00:05:20.864 07:57:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.864 07:57:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.864 07:57:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:20.864 07:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.864 { 00:05:20.864 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.864 } 00:05:20.864 07:57:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:20.864 07:57:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.864 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:20.864 1 heaps totaling size 814.000000 MiB 00:05:20.864 size: 814.000000 MiB heap id: 0 00:05:20.864 end heaps---------- 00:05:20.864 8 mempools totaling size 598.116089 MiB 00:05:20.864 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.864 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.864 size: 84.521057 MiB name: bdev_io_835542 00:05:20.864 size: 51.011292 MiB name: evtpool_835542 00:05:20.864 size: 50.003479 MiB name: msgpool_835542 00:05:20.864 size: 21.763794 MiB name: PDU_Pool 00:05:20.864 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.864 size: 0.026123 MiB name: Session_Pool 00:05:20.864 end mempools------- 00:05:20.864 6 memzones totaling size 4.142822 MiB 00:05:20.864 size: 1.000366 MiB name: RG_ring_0_835542 00:05:20.864 size: 1.000366 MiB name: RG_ring_1_835542 00:05:20.864 size: 1.000366 MiB name: RG_ring_4_835542 00:05:20.864 size: 1.000366 MiB name: RG_ring_5_835542 00:05:20.864 size: 0.125366 MiB name: RG_ring_2_835542 00:05:20.864 size: 0.015991 MiB name: RG_ring_3_835542 00:05:20.864 end memzones------- 00:05:20.864 07:57:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:20.864 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:20.864 list of free elements. 
size: 12.519348 MiB 00:05:20.864 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:20.864 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:20.864 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:20.864 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:20.864 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:20.864 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:20.864 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:20.864 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:20.864 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:20.864 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:20.864 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:20.864 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:20.864 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:20.864 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:20.864 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:20.864 list of standard malloc elements. size: 199.218079 MiB 00:05:20.864 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:20.864 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:20.864 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:20.864 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:20.864 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:20.864 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:20.864 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:20.864 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:20.864 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:20.864 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:20.864 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:20.864 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:20.864 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:20.864 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:20.864 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:20.864 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:20.864 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:20.864 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:20.864 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:20.864 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:20.864 list of memzone associated elements. size: 602.262573 MiB 00:05:20.864 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:20.864 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:20.864 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:20.864 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:20.864 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:20.864 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_835542_0 00:05:20.864 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:20.864 associated memzone info: size: 48.002930 MiB name: MP_evtpool_835542_0 00:05:20.864 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:20.864 associated memzone info: size: 48.002930 MiB name: MP_msgpool_835542_0 00:05:20.864 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:20.865 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:20.865 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:20.865 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:20.865 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:20.865 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_835542 00:05:20.865 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:20.865 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_835542 00:05:20.865 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:20.865 associated memzone info: size: 1.007996 MiB name: MP_evtpool_835542 00:05:20.865 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:20.865 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:20.865 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:20.865 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:20.865 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:20.865 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:20.865 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:20.865 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:20.865 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:20.865 associated memzone info: size: 1.000366 MiB name: RG_ring_0_835542 00:05:20.865 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:20.865 associated memzone info: size: 1.000366 MiB name: RG_ring_1_835542 00:05:20.865 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:20.865 associated memzone info: size: 1.000366 MiB name: RG_ring_4_835542 00:05:20.865 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:20.865 associated memzone info: size: 1.000366 MiB name: RG_ring_5_835542 00:05:20.865 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:20.865 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_835542 00:05:20.865 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:20.865 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:20.865 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:20.865 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:20.865 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:20.865 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:20.865 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:20.865 associated memzone info: size: 0.125366 MiB name: RG_ring_2_835542 00:05:20.865 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:20.865 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:20.865 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:20.865 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:20.865 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:20.865 associated memzone info: size: 0.015991 MiB name: RG_ring_3_835542 00:05:20.865 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:20.865 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:20.865 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:20.865 associated memzone info: size: 0.000183 MiB name: MP_msgpool_835542 00:05:20.865 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:20.865 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_835542 00:05:20.865 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:20.865 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:20.865 07:57:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:20.865 07:57:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 835542 00:05:20.865 07:57:51 -- common/autotest_common.sh@926 -- # '[' -z 835542 ']' 00:05:20.865 07:57:51 -- common/autotest_common.sh@930 -- # kill -0 835542 00:05:20.865 07:57:51 -- common/autotest_common.sh@931 -- # uname 00:05:20.865 07:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:20.865 07:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 835542 00:05:20.865 07:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:20.865 07:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:20.865 07:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 835542' 00:05:20.865 killing process with pid 835542 00:05:20.865 07:57:51 -- common/autotest_common.sh@945 -- # kill 835542 00:05:20.865 07:57:51 -- common/autotest_common.sh@950 -- # wait 835542 00:05:20.865 00:05:20.865 real 0m1.242s 00:05:20.865 user 0m1.313s 00:05:20.865 sys 0m0.343s 00:05:20.865 07:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.865 07:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.865 ************************************ 00:05:20.865 END TEST dpdk_mem_utility 00:05:20.865 ************************************ 00:05:21.124 07:57:51 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.125 07:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.125 07:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.125 07:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:21.125 
************************************ 00:05:21.125 START TEST event 00:05:21.125 ************************************ 00:05:21.125 07:57:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.125 * Looking for test storage... 00:05:21.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:21.125 07:57:51 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:21.125 07:57:51 -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.125 07:57:51 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.125 07:57:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:21.125 07:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.125 07:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:21.125 ************************************ 00:05:21.125 START TEST event_perf 00:05:21.125 ************************************ 00:05:21.125 07:57:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.125 Running I/O for 1 seconds...[2024-06-11 07:57:51.668823] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:21.125 [2024-06-11 07:57:51.668932] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835797 ] 00:05:21.125 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.125 [2024-06-11 07:57:51.738644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.384 [2024-06-11 07:57:51.813792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.384 [2024-06-11 07:57:51.813908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.384 [2024-06-11 07:57:51.814067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.384 [2024-06-11 07:57:51.814067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.323 Running I/O for 1 seconds... 00:05:22.323 lcore 0: 170108 00:05:22.323 lcore 1: 170109 00:05:22.323 lcore 2: 170109 00:05:22.323 lcore 3: 170110 00:05:22.323 done. 
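The per-lcore counters printed above ("lcore N: <count>") come from a one-second event_perf run spread across cores 0-3. A minimal sketch of reproducing that run and summing the counters; the binary path and flags are taken from the log, while the awk post-processing is illustrative and assumes hugepages are already configured on the node:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -m 0xF places one reactor on each of cores 0-3; -t 1 measures for one second.
    "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1 | tee /tmp/event_perf.out

    # Each reactor reports "lcore N: <count>"; add them up for a total over all cores.
    awk '/^lcore/ {sum += $3} END {print "total events:", sum}' /tmp/event_perf.out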
00:05:22.323 00:05:22.323 real 0m1.220s 00:05:22.323 user 0m4.135s 00:05:22.323 sys 0m0.082s 00:05:22.323 07:57:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.323 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.323 ************************************ 00:05:22.323 END TEST event_perf 00:05:22.323 ************************************ 00:05:22.323 07:57:52 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.323 07:57:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:22.323 07:57:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.323 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:05:22.323 ************************************ 00:05:22.323 START TEST event_reactor 00:05:22.323 ************************************ 00:05:22.323 07:57:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.323 [2024-06-11 07:57:52.932384] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:22.323 [2024-06-11 07:57:52.932494] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835979 ] 00:05:22.323 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.584 [2024-06-11 07:57:52.997444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.584 [2024-06-11 07:57:53.062179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.523 test_start 00:05:23.523 oneshot 00:05:23.523 tick 100 00:05:23.523 tick 100 00:05:23.523 tick 250 00:05:23.523 tick 100 00:05:23.524 tick 100 00:05:23.524 tick 100 00:05:23.524 tick 250 00:05:23.524 tick 500 00:05:23.524 tick 100 00:05:23.524 tick 100 00:05:23.524 tick 250 00:05:23.524 tick 100 00:05:23.524 tick 100 00:05:23.524 test_end 00:05:23.524 00:05:23.524 real 0m1.203s 00:05:23.524 user 0m1.128s 00:05:23.524 sys 0m0.070s 00:05:23.524 07:57:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.524 07:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:23.524 ************************************ 00:05:23.524 END TEST event_reactor 00:05:23.524 ************************************ 00:05:23.524 07:57:54 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.524 07:57:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:23.524 07:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.524 07:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:23.524 ************************************ 00:05:23.524 START TEST event_reactor_perf 00:05:23.524 ************************************ 00:05:23.524 07:57:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.783 [2024-06-11 07:57:54.179019] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:23.783 [2024-06-11 07:57:54.179128] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836318 ] 00:05:23.783 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.783 [2024-06-11 07:57:54.242193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.783 [2024-06-11 07:57:54.304299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.723 test_start 00:05:24.723 test_end 00:05:24.723 Performance: 366890 events per second 00:05:24.723 00:05:24.723 real 0m1.198s 00:05:24.723 user 0m1.131s 00:05:24.723 sys 0m0.063s 00:05:24.723 07:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.723 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:05:24.723 ************************************ 00:05:24.723 END TEST event_reactor_perf 00:05:24.723 ************************************ 00:05:24.983 07:57:55 -- event/event.sh@49 -- # uname -s 00:05:24.983 07:57:55 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:24.983 07:57:55 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.984 07:57:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.984 07:57:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.984 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:05:24.984 ************************************ 00:05:24.984 START TEST event_scheduler 00:05:24.984 ************************************ 00:05:24.984 07:57:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:24.984 * Looking for test storage... 00:05:24.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:24.984 07:57:55 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:24.984 07:57:55 -- scheduler/scheduler.sh@35 -- # scheduler_pid=836699 00:05:24.984 07:57:55 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.984 07:57:55 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:24.984 07:57:55 -- scheduler/scheduler.sh@37 -- # waitforlisten 836699 00:05:24.984 07:57:55 -- common/autotest_common.sh@819 -- # '[' -z 836699 ']' 00:05:24.984 07:57:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.984 07:57:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.984 07:57:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.984 07:57:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.984 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:05:24.984 [2024-06-11 07:57:55.536846] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:24.984 [2024-06-11 07:57:55.536907] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836699 ] 00:05:24.984 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.984 [2024-06-11 07:57:55.590799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.244 [2024-06-11 07:57:55.647296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.244 [2024-06-11 07:57:55.647471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.244 [2024-06-11 07:57:55.647561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.244 [2024-06-11 07:57:55.647562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.815 07:57:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.815 07:57:56 -- common/autotest_common.sh@852 -- # return 0 00:05:25.815 07:57:56 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.815 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.815 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:25.815 POWER: Env isn't set yet! 00:05:25.815 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:25.815 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.815 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.815 POWER: Attempting to initialise PSTAT power management... 00:05:25.815 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:25.815 POWER: Initialized successfully for lcore 0 power management 00:05:25.815 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:25.815 POWER: Initialized successfully for lcore 1 power management 00:05:25.815 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:25.815 POWER: Initialized successfully for lcore 2 power management 00:05:25.815 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:25.815 POWER: Initialized successfully for lcore 3 power management 00:05:25.815 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.815 07:57:56 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.815 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.815 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:25.815 [2024-06-11 07:57:56.427771] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
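The scheduler test above starts its app with --wait-for-rpc, so the two RPCs traced here (framework_set_scheduler dynamic, then framework_start_init) are what actually bring the reactors up under the dynamic scheduler and trigger the governor messages that follow. The same sequence can be issued by hand with rpc.py; the socket path below is the default /var/tmp/spdk.sock shown in the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/spdk.sock

    # Select the dynamic scheduler while the app is still paused pre-init (--wait-for-rpc).
    "$SPDK/scripts/rpc.py" -s "$sock" framework_set_scheduler dynamic

    # Finish initialization: reactors start and the power governors seen above are applied.
    "$SPDK/scripts/rpc.py" -s "$sock" framework_start_init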
00:05:25.815 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.815 07:57:56 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.815 07:57:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.815 07:57:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.815 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:25.815 ************************************ 00:05:25.815 START TEST scheduler_create_thread 00:05:25.815 ************************************ 00:05:25.815 07:57:56 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:25.815 07:57:56 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.815 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.815 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:25.815 2 00:05:25.815 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:25.815 07:57:56 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.815 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:25.815 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.076 3 00:05:26.076 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.076 07:57:56 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:26.076 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.076 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.076 4 00:05:26.076 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.076 07:57:56 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:26.076 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.076 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.076 5 00:05:26.076 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.076 07:57:56 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:26.076 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.076 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.076 6 00:05:26.076 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.076 07:57:56 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:26.076 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.076 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.076 7 00:05:26.076 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.076 07:57:56 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:26.076 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.076 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:26.076 8 00:05:26.076 07:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.076 07:57:56 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:26.076 07:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.076 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:05:27.019 9 00:05:27.019 
07:57:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.019 07:57:57 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:27.019 07:57:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.019 07:57:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.960 10 00:05:27.960 07:57:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.960 07:57:58 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.960 07:57:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.960 07:57:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.899 07:57:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.899 07:57:59 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.899 07:57:59 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.899 07:57:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.899 07:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:29.471 07:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:29.471 07:58:00 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.471 07:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:29.471 07:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:30.413 07:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.413 07:58:00 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.413 07:58:00 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.413 07:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.413 07:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:30.674 07:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.674 00:05:30.674 real 0m4.867s 00:05:30.674 user 0m0.028s 00:05:30.674 sys 0m0.002s 00:05:30.674 07:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.674 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:30.674 ************************************ 00:05:30.674 END TEST scheduler_create_thread 00:05:30.674 ************************************ 00:05:30.936 07:58:01 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:30.936 07:58:01 -- scheduler/scheduler.sh@46 -- # killprocess 836699 00:05:30.936 07:58:01 -- common/autotest_common.sh@926 -- # '[' -z 836699 ']' 00:05:30.936 07:58:01 -- common/autotest_common.sh@930 -- # kill -0 836699 00:05:30.936 07:58:01 -- common/autotest_common.sh@931 -- # uname 00:05:30.936 07:58:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:30.936 07:58:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 836699 00:05:30.936 07:58:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:30.936 07:58:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:30.936 07:58:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 836699' 00:05:30.936 killing process with pid 836699 00:05:30.936 07:58:01 -- common/autotest_common.sh@945 -- # kill 836699 00:05:30.936 07:58:01 -- common/autotest_common.sh@950 -- # wait 836699 00:05:31.197 [2024-06-11 07:58:01.583679] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
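The scheduler_create_thread test traced above drives everything through rpc_cmd with a test-local RPC plugin: scheduler_thread_create pins threads to core masks (-m) with an active percentage (-a), scheduler_thread_set_active retargets a thread's activity, and scheduler_thread_delete removes it. A minimal sketch of issuing the same calls by hand against the running scheduler app, assuming rpc.py can find the plugin (for example by putting test/event/scheduler on PYTHONPATH, which is an assumption, not shown in the log):

  export PYTHONPATH=$PYTHONPATH:./test/event/scheduler
  # an always-busy thread pinned to core 0 and an idle thread on the same core
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # an unpinned thread created idle, bumped to 50% activity, then deleted
  tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"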
00:05:31.197 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:31.197 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:31.197 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:31.197 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:31.198 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:31.198 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:31.198 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:31.198 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:31.198 00:05:31.198 real 0m6.340s 00:05:31.198 user 0m14.714s 00:05:31.198 sys 0m0.299s 00:05:31.198 07:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.198 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.198 ************************************ 00:05:31.198 END TEST event_scheduler 00:05:31.198 ************************************ 00:05:31.198 07:58:01 -- event/event.sh@51 -- # modprobe -n nbd 00:05:31.198 07:58:01 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:31.198 07:58:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.198 07:58:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.198 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.198 ************************************ 00:05:31.198 START TEST app_repeat 00:05:31.198 ************************************ 00:05:31.198 07:58:01 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:31.198 07:58:01 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.198 07:58:01 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.198 07:58:01 -- event/event.sh@13 -- # local nbd_list 00:05:31.198 07:58:01 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.198 07:58:01 -- event/event.sh@14 -- # local bdev_list 00:05:31.198 07:58:01 -- event/event.sh@15 -- # local repeat_times=4 00:05:31.198 07:58:01 -- event/event.sh@17 -- # modprobe nbd 00:05:31.198 07:58:01 -- event/event.sh@19 -- # repeat_pid=838028 00:05:31.198 07:58:01 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.198 07:58:01 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:31.198 07:58:01 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 838028' 00:05:31.198 Process app_repeat pid: 838028 00:05:31.198 07:58:01 -- event/event.sh@23 -- # for i in {0..2} 00:05:31.198 07:58:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:31.198 spdk_app_start Round 0 00:05:31.198 07:58:01 -- event/event.sh@25 -- # waitforlisten 838028 /var/tmp/spdk-nbd.sock 00:05:31.198 07:58:01 -- common/autotest_common.sh@819 -- # '[' -z 838028 ']' 00:05:31.198 07:58:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.198 07:58:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.198 07:58:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
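The app_repeat run that starts here can be reproduced outside the harness. The sketch below mirrors one round as the trace shows it (two 64 MB malloc bdevs with a 4 KiB block size, exported over NBD); the background launch and the socket-wait loop are stand-ins for the harness helpers and are assumptions, not taken from the log:

  modprobe nbd
  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  while [ ! -S /var/tmp/spdk-nbd.sock ]; do sleep 0.1; done    # crude stand-in for waitforlisten
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1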
00:05:31.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.198 07:58:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.198 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.198 [2024-06-11 07:58:01.832714] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:31.198 [2024-06-11 07:58:01.832803] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838028 ] 00:05:31.458 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.458 [2024-06-11 07:58:01.897958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.458 [2024-06-11 07:58:01.969032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.458 [2024-06-11 07:58:01.969035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.029 07:58:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.029 07:58:02 -- common/autotest_common.sh@852 -- # return 0 00:05:32.029 07:58:02 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.289 Malloc0 00:05:32.290 07:58:02 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.290 Malloc1 00:05:32.290 07:58:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@12 -- # local i 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.290 07:58:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.551 /dev/nbd0 00:05:32.551 07:58:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.551 07:58:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.551 07:58:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:32.551 07:58:03 -- common/autotest_common.sh@857 -- # local i 00:05:32.551 07:58:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:32.551 07:58:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:32.551 07:58:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:32.551 07:58:03 -- 
common/autotest_common.sh@861 -- # break 00:05:32.551 07:58:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:32.551 07:58:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:32.551 07:58:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.551 1+0 records in 00:05:32.551 1+0 records out 00:05:32.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256253 s, 16.0 MB/s 00:05:32.551 07:58:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.551 07:58:03 -- common/autotest_common.sh@874 -- # size=4096 00:05:32.551 07:58:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.551 07:58:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:32.551 07:58:03 -- common/autotest_common.sh@877 -- # return 0 00:05:32.551 07:58:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.551 07:58:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.551 07:58:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.813 /dev/nbd1 00:05:32.813 07:58:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.813 07:58:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.813 07:58:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:32.813 07:58:03 -- common/autotest_common.sh@857 -- # local i 00:05:32.813 07:58:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:32.813 07:58:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:32.813 07:58:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:32.813 07:58:03 -- common/autotest_common.sh@861 -- # break 00:05:32.813 07:58:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:32.813 07:58:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:32.813 07:58:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.813 1+0 records in 00:05:32.813 1+0 records out 00:05:32.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298153 s, 13.7 MB/s 00:05:32.813 07:58:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.813 07:58:03 -- common/autotest_common.sh@874 -- # size=4096 00:05:32.813 07:58:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.813 07:58:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:32.813 07:58:03 -- common/autotest_common.sh@877 -- # return 0 00:05:32.813 07:58:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.813 07:58:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.813 07:58:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.813 07:58:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.813 07:58:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.075 { 00:05:33.075 "nbd_device": "/dev/nbd0", 00:05:33.075 "bdev_name": "Malloc0" 00:05:33.075 }, 00:05:33.075 { 00:05:33.075 "nbd_device": "/dev/nbd1", 
00:05:33.075 "bdev_name": "Malloc1" 00:05:33.075 } 00:05:33.075 ]' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.075 { 00:05:33.075 "nbd_device": "/dev/nbd0", 00:05:33.075 "bdev_name": "Malloc0" 00:05:33.075 }, 00:05:33.075 { 00:05:33.075 "nbd_device": "/dev/nbd1", 00:05:33.075 "bdev_name": "Malloc1" 00:05:33.075 } 00:05:33.075 ]' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.075 /dev/nbd1' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.075 /dev/nbd1' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.075 256+0 records in 00:05:33.075 256+0 records out 00:05:33.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124947 s, 83.9 MB/s 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.075 256+0 records in 00:05:33.075 256+0 records out 00:05:33.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159205 s, 65.9 MB/s 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.075 256+0 records in 00:05:33.075 256+0 records out 00:05:33.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177621 s, 59.0 MB/s 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@51 -- # local i 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.075 07:58:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@41 -- # break 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@41 -- # break 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.337 07:58:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@65 -- # true 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.598 07:58:04 -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.598 07:58:04 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.858 07:58:04 -- event/event.sh@35 -- # 
sleep 3 00:05:33.858 [2024-06-11 07:58:04.411563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.858 [2024-06-11 07:58:04.473212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.858 [2024-06-11 07:58:04.473214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.858 [2024-06-11 07:58:04.504672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.858 [2024-06-11 07:58:04.504707] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.159 07:58:07 -- event/event.sh@23 -- # for i in {0..2} 00:05:37.159 07:58:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.159 spdk_app_start Round 1 00:05:37.159 07:58:07 -- event/event.sh@25 -- # waitforlisten 838028 /var/tmp/spdk-nbd.sock 00:05:37.159 07:58:07 -- common/autotest_common.sh@819 -- # '[' -z 838028 ']' 00:05:37.159 07:58:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.159 07:58:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:37.159 07:58:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.159 07:58:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:37.159 07:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.159 07:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.159 07:58:07 -- common/autotest_common.sh@852 -- # return 0 00:05:37.160 07:58:07 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.160 Malloc0 00:05:37.160 07:58:07 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.160 Malloc1 00:05:37.160 07:58:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@12 -- # local i 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.160 07:58:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.422 /dev/nbd0 00:05:37.422 07:58:07 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.422 07:58:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.422 07:58:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:37.422 07:58:07 -- common/autotest_common.sh@857 -- # local i 00:05:37.422 07:58:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:37.422 07:58:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:37.422 07:58:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:37.422 07:58:07 -- common/autotest_common.sh@861 -- # break 00:05:37.422 07:58:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:37.422 07:58:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:37.422 07:58:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.422 1+0 records in 00:05:37.422 1+0 records out 00:05:37.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268767 s, 15.2 MB/s 00:05:37.422 07:58:07 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.422 07:58:07 -- common/autotest_common.sh@874 -- # size=4096 00:05:37.422 07:58:07 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.422 07:58:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:37.422 07:58:07 -- common/autotest_common.sh@877 -- # return 0 00:05:37.422 07:58:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.422 07:58:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.422 07:58:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.683 /dev/nbd1 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.683 07:58:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:37.683 07:58:08 -- common/autotest_common.sh@857 -- # local i 00:05:37.683 07:58:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:37.683 07:58:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:37.683 07:58:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:37.683 07:58:08 -- common/autotest_common.sh@861 -- # break 00:05:37.683 07:58:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:37.683 07:58:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:37.683 07:58:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.683 1+0 records in 00:05:37.683 1+0 records out 00:05:37.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390297 s, 10.5 MB/s 00:05:37.683 07:58:08 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.683 07:58:08 -- common/autotest_common.sh@874 -- # size=4096 00:05:37.683 07:58:08 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.683 07:58:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:37.683 07:58:08 -- common/autotest_common.sh@877 -- # return 0 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.683 { 00:05:37.683 "nbd_device": "/dev/nbd0", 00:05:37.683 "bdev_name": "Malloc0" 00:05:37.683 }, 00:05:37.683 { 00:05:37.683 "nbd_device": "/dev/nbd1", 00:05:37.683 "bdev_name": "Malloc1" 00:05:37.683 } 00:05:37.683 ]' 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.683 { 00:05:37.683 "nbd_device": "/dev/nbd0", 00:05:37.683 "bdev_name": "Malloc0" 00:05:37.683 }, 00:05:37.683 { 00:05:37.683 "nbd_device": "/dev/nbd1", 00:05:37.683 "bdev_name": "Malloc1" 00:05:37.683 } 00:05:37.683 ]' 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.683 /dev/nbd1' 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.683 /dev/nbd1' 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.683 07:58:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.945 256+0 records in 00:05:37.945 256+0 records out 00:05:37.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124955 s, 83.9 MB/s 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.945 256+0 records in 00:05:37.945 256+0 records out 00:05:37.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160633 s, 65.3 MB/s 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.945 256+0 records in 00:05:37.945 256+0 records out 00:05:37.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166935 s, 62.8 MB/s 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@51 -- # local i 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@41 -- # break 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.945 07:58:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@41 -- # break 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.206 07:58:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@65 -- # true 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.467 07:58:08 -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.467 07:58:08 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.467 07:58:09 -- event/event.sh@35 -- # sleep 3 00:05:38.727 [2024-06-11 07:58:09.227468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.727 [2024-06-11 07:58:09.288877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.727 [2024-06-11 07:58:09.288881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.727 [2024-06-11 07:58:09.320141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.727 [2024-06-11 07:58:09.320176] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.028 07:58:12 -- event/event.sh@23 -- # for i in {0..2} 00:05:42.029 07:58:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:42.029 spdk_app_start Round 2 00:05:42.029 07:58:12 -- event/event.sh@25 -- # waitforlisten 838028 /var/tmp/spdk-nbd.sock 00:05:42.029 07:58:12 -- common/autotest_common.sh@819 -- # '[' -z 838028 ']' 00:05:42.029 07:58:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.029 07:58:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.029 07:58:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
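Each round's data check, traced in the dd/cmp runs above, follows the same pattern: fill a scratch file with 1 MiB of random data, write it through each NBD device with O_DIRECT, and compare it back byte-for-byte. A minimal sketch of that pattern (using /tmp/nbdrandtest here instead of the workspace path in the log):

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write the pattern
      cmp -b -n 1M /tmp/nbdrandtest $nbd                              # read-back verify
  done
  rm /tmp/nbdrandtest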
00:05:42.029 07:58:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.029 07:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.029 07:58:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.029 07:58:12 -- common/autotest_common.sh@852 -- # return 0 00:05:42.029 07:58:12 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.029 Malloc0 00:05:42.029 07:58:12 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.029 Malloc1 00:05:42.029 07:58:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@12 -- # local i 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.029 07:58:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:42.289 /dev/nbd0 00:05:42.289 07:58:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:42.289 07:58:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:42.289 07:58:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:42.289 07:58:12 -- common/autotest_common.sh@857 -- # local i 00:05:42.289 07:58:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:42.289 07:58:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:42.289 07:58:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:42.289 07:58:12 -- common/autotest_common.sh@861 -- # break 00:05:42.289 07:58:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:42.289 07:58:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:42.289 07:58:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.289 1+0 records in 00:05:42.289 1+0 records out 00:05:42.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246076 s, 16.6 MB/s 00:05:42.289 07:58:12 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.289 07:58:12 -- common/autotest_common.sh@874 -- # size=4096 00:05:42.289 07:58:12 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.289 07:58:12 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:05:42.289 07:58:12 -- common/autotest_common.sh@877 -- # return 0 00:05:42.289 07:58:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.289 07:58:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.290 /dev/nbd1 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.290 07:58:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:42.290 07:58:12 -- common/autotest_common.sh@857 -- # local i 00:05:42.290 07:58:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:42.290 07:58:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:42.290 07:58:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:42.290 07:58:12 -- common/autotest_common.sh@861 -- # break 00:05:42.290 07:58:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:42.290 07:58:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:42.290 07:58:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.290 1+0 records in 00:05:42.290 1+0 records out 00:05:42.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281289 s, 14.6 MB/s 00:05:42.290 07:58:12 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.290 07:58:12 -- common/autotest_common.sh@874 -- # size=4096 00:05:42.290 07:58:12 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:42.290 07:58:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:42.290 07:58:12 -- common/autotest_common.sh@877 -- # return 0 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.290 07:58:12 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.551 { 00:05:42.551 "nbd_device": "/dev/nbd0", 00:05:42.551 "bdev_name": "Malloc0" 00:05:42.551 }, 00:05:42.551 { 00:05:42.551 "nbd_device": "/dev/nbd1", 00:05:42.551 "bdev_name": "Malloc1" 00:05:42.551 } 00:05:42.551 ]' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.551 { 00:05:42.551 "nbd_device": "/dev/nbd0", 00:05:42.551 "bdev_name": "Malloc0" 00:05:42.551 }, 00:05:42.551 { 00:05:42.551 "nbd_device": "/dev/nbd1", 00:05:42.551 "bdev_name": "Malloc1" 00:05:42.551 } 00:05:42.551 ]' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.551 /dev/nbd1' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.551 /dev/nbd1' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.551 07:58:13 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.551 256+0 records in 00:05:42.551 256+0 records out 00:05:42.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122917 s, 85.3 MB/s 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.551 256+0 records in 00:05:42.551 256+0 records out 00:05:42.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163394 s, 64.2 MB/s 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.551 256+0 records in 00:05:42.551 256+0 records out 00:05:42.551 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169547 s, 61.8 MB/s 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@51 -- # local i 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.551 07:58:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.811 07:58:13 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@41 -- # break 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.811 07:58:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@41 -- # break 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@65 -- # true 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.073 07:58:13 -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.073 07:58:13 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.333 07:58:13 -- event/event.sh@35 -- # sleep 3 00:05:43.593 [2024-06-11 07:58:13.984855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.593 [2024-06-11 07:58:14.046513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.593 [2024-06-11 07:58:14.046514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.593 [2024-06-11 07:58:14.077856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.593 [2024-06-11 07:58:14.077890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
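The three rounds above come from a single loop in the app_repeat test: each pass recreates the malloc bdevs and NBD exports, runs the dd/cmp check, tears the NBD devices down, then asks the app to restart itself and sleeps before the next pass. In outline, matching the "for i in {0..2}" trace above (details elided, a sketch rather than the test source):

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      # bdev_malloc_create / nbd_start_disk / dd / cmp / nbd_stop_disk as sketched earlier
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3
  done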
00:05:46.888 07:58:16 -- event/event.sh@38 -- # waitforlisten 838028 /var/tmp/spdk-nbd.sock 00:05:46.888 07:58:16 -- common/autotest_common.sh@819 -- # '[' -z 838028 ']' 00:05:46.888 07:58:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.888 07:58:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.888 07:58:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.888 07:58:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.888 07:58:16 -- common/autotest_common.sh@10 -- # set +x 00:05:46.888 07:58:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.888 07:58:17 -- common/autotest_common.sh@852 -- # return 0 00:05:46.888 07:58:17 -- event/event.sh@39 -- # killprocess 838028 00:05:46.888 07:58:17 -- common/autotest_common.sh@926 -- # '[' -z 838028 ']' 00:05:46.888 07:58:17 -- common/autotest_common.sh@930 -- # kill -0 838028 00:05:46.888 07:58:17 -- common/autotest_common.sh@931 -- # uname 00:05:46.888 07:58:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:46.888 07:58:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 838028 00:05:46.888 07:58:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:46.888 07:58:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:46.888 07:58:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 838028' 00:05:46.888 killing process with pid 838028 00:05:46.888 07:58:17 -- common/autotest_common.sh@945 -- # kill 838028 00:05:46.888 07:58:17 -- common/autotest_common.sh@950 -- # wait 838028 00:05:46.888 spdk_app_start is called in Round 0. 00:05:46.888 Shutdown signal received, stop current app iteration 00:05:46.888 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:46.888 spdk_app_start is called in Round 1. 00:05:46.888 Shutdown signal received, stop current app iteration 00:05:46.888 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:46.888 spdk_app_start is called in Round 2. 00:05:46.888 Shutdown signal received, stop current app iteration 00:05:46.888 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:46.888 spdk_app_start is called in Round 3. 
00:05:46.888 Shutdown signal received, stop current app iteration 00:05:46.888 07:58:17 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.888 07:58:17 -- event/event.sh@42 -- # return 0 00:05:46.888 00:05:46.888 real 0m15.375s 00:05:46.889 user 0m33.170s 00:05:46.889 sys 0m2.038s 00:05:46.889 07:58:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.889 07:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.889 ************************************ 00:05:46.889 END TEST app_repeat 00:05:46.889 ************************************ 00:05:46.889 07:58:17 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.889 07:58:17 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.889 07:58:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.889 07:58:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.889 07:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.889 ************************************ 00:05:46.889 START TEST cpu_locks 00:05:46.889 ************************************ 00:05:46.889 07:58:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.889 * Looking for test storage... 00:05:46.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.889 07:58:17 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.889 07:58:17 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.889 07:58:17 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.889 07:58:17 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.889 07:58:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.889 07:58:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.889 07:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.889 ************************************ 00:05:46.889 START TEST default_locks 00:05:46.889 ************************************ 00:05:46.889 07:58:17 -- common/autotest_common.sh@1104 -- # default_locks 00:05:46.889 07:58:17 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=841385 00:05:46.889 07:58:17 -- event/cpu_locks.sh@47 -- # waitforlisten 841385 00:05:46.889 07:58:17 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.889 07:58:17 -- common/autotest_common.sh@819 -- # '[' -z 841385 ']' 00:05:46.889 07:58:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.889 07:58:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.889 07:58:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.889 07:58:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.889 07:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.889 [2024-06-11 07:58:17.366606] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
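The killprocess helper whose trace closes each test above (here for the app_repeat pid 838028, and again below for the spdk_tgt pids) comes from test/common/autotest_common.sh. A simplified sketch of what the traced steps amount to, reconstructed from the kill -0 / ps / kill / wait sequence visible in the log rather than copied from the source:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                                    # still running?
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # refuse to kill sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                                   # reap it
  }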
00:05:46.889 [2024-06-11 07:58:17.366662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841385 ] 00:05:46.889 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.889 [2024-06-11 07:58:17.426759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.889 [2024-06-11 07:58:17.490447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.889 [2024-06-11 07:58:17.490573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.828 07:58:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.828 07:58:18 -- common/autotest_common.sh@852 -- # return 0 00:05:47.828 07:58:18 -- event/cpu_locks.sh@49 -- # locks_exist 841385 00:05:47.828 07:58:18 -- event/cpu_locks.sh@22 -- # lslocks -p 841385 00:05:47.828 07:58:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.828 lslocks: write error 00:05:47.828 07:58:18 -- event/cpu_locks.sh@50 -- # killprocess 841385 00:05:47.828 07:58:18 -- common/autotest_common.sh@926 -- # '[' -z 841385 ']' 00:05:47.828 07:58:18 -- common/autotest_common.sh@930 -- # kill -0 841385 00:05:47.828 07:58:18 -- common/autotest_common.sh@931 -- # uname 00:05:47.828 07:58:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.828 07:58:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 841385 00:05:47.828 07:58:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.828 07:58:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.828 07:58:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 841385' 00:05:47.828 killing process with pid 841385 00:05:47.828 07:58:18 -- common/autotest_common.sh@945 -- # kill 841385 00:05:47.828 07:58:18 -- common/autotest_common.sh@950 -- # wait 841385 00:05:48.088 07:58:18 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 841385 00:05:48.088 07:58:18 -- common/autotest_common.sh@640 -- # local es=0 00:05:48.088 07:58:18 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 841385 00:05:48.088 07:58:18 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:48.088 07:58:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.088 07:58:18 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:48.088 07:58:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:48.088 07:58:18 -- common/autotest_common.sh@643 -- # waitforlisten 841385 00:05:48.088 07:58:18 -- common/autotest_common.sh@819 -- # '[' -z 841385 ']' 00:05:48.088 07:58:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.088 07:58:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.088 07:58:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.088 07:58:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.088 07:58:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (841385) - No such process 00:05:48.088 ERROR: process (pid: 841385) is no longer running 00:05:48.088 07:58:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.088 07:58:18 -- common/autotest_common.sh@852 -- # return 1 00:05:48.088 07:58:18 -- common/autotest_common.sh@643 -- # es=1 00:05:48.088 07:58:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:48.088 07:58:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:48.088 07:58:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:48.088 07:58:18 -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.088 07:58:18 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.088 07:58:18 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.088 07:58:18 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.088 00:05:48.088 real 0m1.203s 00:05:48.088 user 0m1.266s 00:05:48.088 sys 0m0.384s 00:05:48.088 07:58:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.088 07:58:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 ************************************ 00:05:48.088 END TEST default_locks 00:05:48.088 ************************************ 00:05:48.088 07:58:18 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.088 07:58:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.088 07:58:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.088 07:58:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 ************************************ 00:05:48.088 START TEST default_locks_via_rpc 00:05:48.088 ************************************ 00:05:48.088 07:58:18 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:48.088 07:58:18 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=841692 00:05:48.088 07:58:18 -- event/cpu_locks.sh@63 -- # waitforlisten 841692 00:05:48.088 07:58:18 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.088 07:58:18 -- common/autotest_common.sh@819 -- # '[' -z 841692 ']' 00:05:48.088 07:58:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.088 07:58:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.088 07:58:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.088 07:58:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.088 07:58:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.088 [2024-06-11 07:58:18.615668] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:48.088 [2024-06-11 07:58:18.615729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841692 ] 00:05:48.088 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.088 [2024-06-11 07:58:18.676552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.348 [2024-06-11 07:58:18.742618] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.348 [2024-06-11 07:58:18.742745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.918 07:58:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:48.918 07:58:19 -- common/autotest_common.sh@852 -- # return 0 00:05:48.918 07:58:19 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.918 07:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.918 07:58:19 -- common/autotest_common.sh@10 -- # set +x 00:05:48.918 07:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.918 07:58:19 -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.918 07:58:19 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.918 07:58:19 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.918 07:58:19 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.918 07:58:19 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.918 07:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:48.918 07:58:19 -- common/autotest_common.sh@10 -- # set +x 00:05:48.918 07:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:48.918 07:58:19 -- event/cpu_locks.sh@71 -- # locks_exist 841692 00:05:48.918 07:58:19 -- event/cpu_locks.sh@22 -- # lslocks -p 841692 00:05:48.918 07:58:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.179 07:58:19 -- event/cpu_locks.sh@73 -- # killprocess 841692 00:05:49.179 07:58:19 -- common/autotest_common.sh@926 -- # '[' -z 841692 ']' 00:05:49.179 07:58:19 -- common/autotest_common.sh@930 -- # kill -0 841692 00:05:49.179 07:58:19 -- common/autotest_common.sh@931 -- # uname 00:05:49.179 07:58:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.179 07:58:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 841692 00:05:49.440 07:58:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.440 07:58:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.440 07:58:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 841692' 00:05:49.440 killing process with pid 841692 00:05:49.440 07:58:19 -- common/autotest_common.sh@945 -- # kill 841692 00:05:49.440 07:58:19 -- common/autotest_common.sh@950 -- # wait 841692 00:05:49.440 00:05:49.440 real 0m1.481s 00:05:49.440 user 0m1.570s 00:05:49.440 sys 0m0.493s 00:05:49.440 07:58:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.440 07:58:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.440 ************************************ 00:05:49.440 END TEST default_locks_via_rpc 00:05:49.440 ************************************ 00:05:49.440 07:58:20 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.440 07:58:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.440 07:58:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.440 07:58:20 -- common/autotest_common.sh@10 
-- # set +x 00:05:49.701 ************************************ 00:05:49.701 START TEST non_locking_app_on_locked_coremask 00:05:49.701 ************************************ 00:05:49.701 07:58:20 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:49.701 07:58:20 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=841972 00:05:49.701 07:58:20 -- event/cpu_locks.sh@81 -- # waitforlisten 841972 /var/tmp/spdk.sock 00:05:49.702 07:58:20 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.702 07:58:20 -- common/autotest_common.sh@819 -- # '[' -z 841972 ']' 00:05:49.702 07:58:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.702 07:58:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.702 07:58:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.702 07:58:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.702 07:58:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 [2024-06-11 07:58:20.140817] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:49.702 [2024-06-11 07:58:20.140876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841972 ] 00:05:49.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.702 [2024-06-11 07:58:20.201152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.702 [2024-06-11 07:58:20.267174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.702 [2024-06-11 07:58:20.267303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.272 07:58:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.272 07:58:20 -- common/autotest_common.sh@852 -- # return 0 00:05:50.272 07:58:20 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=842134 00:05:50.272 07:58:20 -- event/cpu_locks.sh@85 -- # waitforlisten 842134 /var/tmp/spdk2.sock 00:05:50.272 07:58:20 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.272 07:58:20 -- common/autotest_common.sh@819 -- # '[' -z 842134 ']' 00:05:50.272 07:58:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.272 07:58:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.272 07:58:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.272 07:58:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.272 07:58:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.533 [2024-06-11 07:58:20.955360] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:50.533 [2024-06-11 07:58:20.955414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842134 ] 00:05:50.533 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.533 [2024-06-11 07:58:21.044858] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.533 [2024-06-11 07:58:21.044886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.533 [2024-06-11 07:58:21.171923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.533 [2024-06-11 07:58:21.172055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.103 07:58:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.103 07:58:21 -- common/autotest_common.sh@852 -- # return 0 00:05:51.103 07:58:21 -- event/cpu_locks.sh@87 -- # locks_exist 841972 00:05:51.103 07:58:21 -- event/cpu_locks.sh@22 -- # lslocks -p 841972 00:05:51.103 07:58:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.673 lslocks: write error 00:05:51.673 07:58:22 -- event/cpu_locks.sh@89 -- # killprocess 841972 00:05:51.673 07:58:22 -- common/autotest_common.sh@926 -- # '[' -z 841972 ']' 00:05:51.673 07:58:22 -- common/autotest_common.sh@930 -- # kill -0 841972 00:05:51.673 07:58:22 -- common/autotest_common.sh@931 -- # uname 00:05:51.673 07:58:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.673 07:58:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 841972 00:05:51.933 07:58:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.933 07:58:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.933 07:58:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 841972' 00:05:51.933 killing process with pid 841972 00:05:51.933 07:58:22 -- common/autotest_common.sh@945 -- # kill 841972 00:05:51.933 07:58:22 -- common/autotest_common.sh@950 -- # wait 841972 00:05:52.193 07:58:22 -- event/cpu_locks.sh@90 -- # killprocess 842134 00:05:52.193 07:58:22 -- common/autotest_common.sh@926 -- # '[' -z 842134 ']' 00:05:52.193 07:58:22 -- common/autotest_common.sh@930 -- # kill -0 842134 00:05:52.193 07:58:22 -- common/autotest_common.sh@931 -- # uname 00:05:52.193 07:58:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.193 07:58:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 842134 00:05:52.193 07:58:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.193 07:58:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.193 07:58:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 842134' 00:05:52.193 killing process with pid 842134 00:05:52.193 07:58:22 -- common/autotest_common.sh@945 -- # kill 842134 00:05:52.193 07:58:22 -- common/autotest_common.sh@950 -- # wait 842134 00:05:52.453 00:05:52.453 real 0m2.925s 00:05:52.453 user 0m3.187s 00:05:52.453 sys 0m0.865s 00:05:52.453 07:58:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.453 07:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:52.453 ************************************ 00:05:52.453 END TEST non_locking_app_on_locked_coremask 00:05:52.453 ************************************ 00:05:52.453 07:58:23 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
00:05:52.453 07:58:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.453 07:58:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.453 07:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:52.453 ************************************ 00:05:52.453 START TEST locking_app_on_unlocked_coremask 00:05:52.453 ************************************ 00:05:52.453 07:58:23 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:52.453 07:58:23 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=842515 00:05:52.454 07:58:23 -- event/cpu_locks.sh@99 -- # waitforlisten 842515 /var/tmp/spdk.sock 00:05:52.454 07:58:23 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.454 07:58:23 -- common/autotest_common.sh@819 -- # '[' -z 842515 ']' 00:05:52.454 07:58:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.454 07:58:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.454 07:58:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.454 07:58:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.454 07:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:52.714 [2024-06-11 07:58:23.110399] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:52.714 [2024-06-11 07:58:23.110466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842515 ] 00:05:52.714 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.714 [2024-06-11 07:58:23.170424] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:52.714 [2024-06-11 07:58:23.170457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.714 [2024-06-11 07:58:23.234041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.714 [2024-06-11 07:58:23.234168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.286 07:58:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.286 07:58:23 -- common/autotest_common.sh@852 -- # return 0 00:05:53.286 07:58:23 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=842839 00:05:53.286 07:58:23 -- event/cpu_locks.sh@103 -- # waitforlisten 842839 /var/tmp/spdk2.sock 00:05:53.286 07:58:23 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.286 07:58:23 -- common/autotest_common.sh@819 -- # '[' -z 842839 ']' 00:05:53.286 07:58:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.286 07:58:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.286 07:58:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:53.286 07:58:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.286 07:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.286 [2024-06-11 07:58:23.917089] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:53.286 [2024-06-11 07:58:23.917140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842839 ] 00:05:53.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.547 [2024-06-11 07:58:24.007147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.547 [2024-06-11 07:58:24.134456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.547 [2024-06-11 07:58:24.134581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.118 07:58:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.118 07:58:24 -- common/autotest_common.sh@852 -- # return 0 00:05:54.118 07:58:24 -- event/cpu_locks.sh@105 -- # locks_exist 842839 00:05:54.118 07:58:24 -- event/cpu_locks.sh@22 -- # lslocks -p 842839 00:05:54.118 07:58:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.689 lslocks: write error 00:05:54.689 07:58:25 -- event/cpu_locks.sh@107 -- # killprocess 842515 00:05:54.689 07:58:25 -- common/autotest_common.sh@926 -- # '[' -z 842515 ']' 00:05:54.689 07:58:25 -- common/autotest_common.sh@930 -- # kill -0 842515 00:05:54.689 07:58:25 -- common/autotest_common.sh@931 -- # uname 00:05:54.689 07:58:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.689 07:58:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 842515 00:05:54.689 07:58:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.689 07:58:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.689 07:58:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 842515' 00:05:54.689 killing process with pid 842515 00:05:54.689 07:58:25 -- common/autotest_common.sh@945 -- # kill 842515 00:05:54.689 07:58:25 -- common/autotest_common.sh@950 -- # wait 842515 00:05:55.260 07:58:25 -- event/cpu_locks.sh@108 -- # killprocess 842839 00:05:55.260 07:58:25 -- common/autotest_common.sh@926 -- # '[' -z 842839 ']' 00:05:55.260 07:58:25 -- common/autotest_common.sh@930 -- # kill -0 842839 00:05:55.260 07:58:25 -- common/autotest_common.sh@931 -- # uname 00:05:55.260 07:58:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.261 07:58:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 842839 00:05:55.261 07:58:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.261 07:58:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.261 07:58:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 842839' 00:05:55.261 killing process with pid 842839 00:05:55.261 07:58:25 -- common/autotest_common.sh@945 -- # kill 842839 00:05:55.261 07:58:25 -- common/autotest_common.sh@950 -- # wait 842839 00:05:55.521 00:05:55.521 real 0m2.915s 00:05:55.521 user 0m3.165s 00:05:55.521 sys 0m0.873s 00:05:55.521 07:58:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.521 07:58:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.521 ************************************ 00:05:55.521 END TEST locking_app_on_unlocked_coremask 00:05:55.521 
************************************ 00:05:55.521 07:58:26 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.521 07:58:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.521 07:58:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.521 07:58:26 -- common/autotest_common.sh@10 -- # set +x 00:05:55.521 ************************************ 00:05:55.521 START TEST locking_app_on_locked_coremask 00:05:55.521 ************************************ 00:05:55.521 07:58:26 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:55.521 07:58:26 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=843225 00:05:55.521 07:58:26 -- event/cpu_locks.sh@116 -- # waitforlisten 843225 /var/tmp/spdk.sock 00:05:55.521 07:58:26 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.521 07:58:26 -- common/autotest_common.sh@819 -- # '[' -z 843225 ']' 00:05:55.521 07:58:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.521 07:58:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.521 07:58:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.521 07:58:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.521 07:58:26 -- common/autotest_common.sh@10 -- # set +x 00:05:55.521 [2024-06-11 07:58:26.068802] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:55.521 [2024-06-11 07:58:26.068860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843225 ] 00:05:55.521 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.521 [2024-06-11 07:58:26.128944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.781 [2024-06-11 07:58:26.193544] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.781 [2024-06-11 07:58:26.193665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.352 07:58:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.352 07:58:26 -- common/autotest_common.sh@852 -- # return 0 00:05:56.352 07:58:26 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=843426 00:05:56.352 07:58:26 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 843426 /var/tmp/spdk2.sock 00:05:56.352 07:58:26 -- common/autotest_common.sh@640 -- # local es=0 00:05:56.352 07:58:26 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.352 07:58:26 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 843426 /var/tmp/spdk2.sock 00:05:56.352 07:58:26 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:56.352 07:58:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:56.352 07:58:26 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:56.352 07:58:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:56.352 07:58:26 -- common/autotest_common.sh@643 -- # waitforlisten 843426 /var/tmp/spdk2.sock 00:05:56.352 07:58:26 -- common/autotest_common.sh@819 -- # '[' -z 843426 ']' 
00:05:56.352 07:58:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.352 07:58:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.352 07:58:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.352 07:58:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.352 07:58:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.352 [2024-06-11 07:58:26.866918] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:56.352 [2024-06-11 07:58:26.866968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843426 ] 00:05:56.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.352 [2024-06-11 07:58:26.957226] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 843225 has claimed it. 00:05:56.352 [2024-06-11 07:58:26.957264] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (843426) - No such process 00:05:56.922 ERROR: process (pid: 843426) is no longer running 00:05:56.922 07:58:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.922 07:58:27 -- common/autotest_common.sh@852 -- # return 1 00:05:56.922 07:58:27 -- common/autotest_common.sh@643 -- # es=1 00:05:56.922 07:58:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:56.922 07:58:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:56.922 07:58:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:56.922 07:58:27 -- event/cpu_locks.sh@122 -- # locks_exist 843225 00:05:56.922 07:58:27 -- event/cpu_locks.sh@22 -- # lslocks -p 843225 00:05:56.922 07:58:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.493 lslocks: write error 00:05:57.493 07:58:27 -- event/cpu_locks.sh@124 -- # killprocess 843225 00:05:57.493 07:58:27 -- common/autotest_common.sh@926 -- # '[' -z 843225 ']' 00:05:57.493 07:58:27 -- common/autotest_common.sh@930 -- # kill -0 843225 00:05:57.493 07:58:27 -- common/autotest_common.sh@931 -- # uname 00:05:57.493 07:58:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.493 07:58:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 843225 00:05:57.493 07:58:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.493 07:58:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.493 07:58:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 843225' 00:05:57.493 killing process with pid 843225 00:05:57.493 07:58:27 -- common/autotest_common.sh@945 -- # kill 843225 00:05:57.493 07:58:27 -- common/autotest_common.sh@950 -- # wait 843225 00:05:57.493 00:05:57.493 real 0m2.109s 00:05:57.493 user 0m2.339s 00:05:57.493 sys 0m0.581s 00:05:57.493 07:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.493 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.493 ************************************ 00:05:57.493 END TEST locking_app_on_locked_coremask 00:05:57.493 ************************************ 00:05:57.753 07:58:28 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:57.753 07:58:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.753 07:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.753 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.753 ************************************ 00:05:57.753 START TEST locking_overlapped_coremask 00:05:57.753 ************************************ 00:05:57.753 07:58:28 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:57.753 07:58:28 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=843633 00:05:57.753 07:58:28 -- event/cpu_locks.sh@133 -- # waitforlisten 843633 /var/tmp/spdk.sock 00:05:57.754 07:58:28 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:57.754 07:58:28 -- common/autotest_common.sh@819 -- # '[' -z 843633 ']' 00:05:57.754 07:58:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.754 07:58:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.754 07:58:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.754 07:58:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.754 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.754 [2024-06-11 07:58:28.225318] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:57.754 [2024-06-11 07:58:28.225377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843633 ] 00:05:57.754 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.754 [2024-06-11 07:58:28.285838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.754 [2024-06-11 07:58:28.351768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.754 [2024-06-11 07:58:28.352051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.754 [2024-06-11 07:58:28.352192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.754 [2024-06-11 07:58:28.352195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.695 07:58:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.695 07:58:28 -- common/autotest_common.sh@852 -- # return 0 00:05:58.695 07:58:28 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=843939 00:05:58.695 07:58:28 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 843939 /var/tmp/spdk2.sock 00:05:58.695 07:58:28 -- common/autotest_common.sh@640 -- # local es=0 00:05:58.695 07:58:28 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.695 07:58:28 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 843939 /var/tmp/spdk2.sock 00:05:58.695 07:58:28 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:58.695 07:58:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.695 07:58:28 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:58.695 07:58:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:58.695 07:58:28 -- common/autotest_common.sh@643 -- # 
waitforlisten 843939 /var/tmp/spdk2.sock 00:05:58.695 07:58:28 -- common/autotest_common.sh@819 -- # '[' -z 843939 ']' 00:05:58.695 07:58:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.695 07:58:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.695 07:58:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.695 07:58:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.695 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.695 [2024-06-11 07:58:29.025938] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:58.695 [2024-06-11 07:58:29.025986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843939 ] 00:05:58.695 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.695 [2024-06-11 07:58:29.096169] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 843633 has claimed it. 00:05:58.695 [2024-06-11 07:58:29.096196] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (843939) - No such process 00:05:59.266 ERROR: process (pid: 843939) is no longer running 00:05:59.266 07:58:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.266 07:58:29 -- common/autotest_common.sh@852 -- # return 1 00:05:59.266 07:58:29 -- common/autotest_common.sh@643 -- # es=1 00:05:59.266 07:58:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:59.266 07:58:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:59.266 07:58:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:59.266 07:58:29 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.266 07:58:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.266 07:58:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.266 07:58:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.266 07:58:29 -- event/cpu_locks.sh@141 -- # killprocess 843633 00:05:59.266 07:58:29 -- common/autotest_common.sh@926 -- # '[' -z 843633 ']' 00:05:59.266 07:58:29 -- common/autotest_common.sh@930 -- # kill -0 843633 00:05:59.266 07:58:29 -- common/autotest_common.sh@931 -- # uname 00:05:59.266 07:58:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.266 07:58:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 843633 00:05:59.266 07:58:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.266 07:58:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.266 07:58:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 843633' 00:05:59.266 killing process with pid 843633 00:05:59.266 07:58:29 -- common/autotest_common.sh@945 -- # kill 843633 00:05:59.266 07:58:29 -- common/autotest_common.sh@950 -- # wait 843633 
00:05:59.266 00:05:59.266 real 0m1.719s 00:05:59.266 user 0m4.860s 00:05:59.266 sys 0m0.361s 00:05:59.266 07:58:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.266 07:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.266 ************************************ 00:05:59.266 END TEST locking_overlapped_coremask 00:05:59.266 ************************************ 00:05:59.528 07:58:29 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.528 07:58:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.528 07:58:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.528 07:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.528 ************************************ 00:05:59.528 START TEST locking_overlapped_coremask_via_rpc 00:05:59.528 ************************************ 00:05:59.528 07:58:29 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:59.528 07:58:29 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=844047 00:05:59.528 07:58:29 -- event/cpu_locks.sh@149 -- # waitforlisten 844047 /var/tmp/spdk.sock 00:05:59.528 07:58:29 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.528 07:58:29 -- common/autotest_common.sh@819 -- # '[' -z 844047 ']' 00:05:59.528 07:58:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.528 07:58:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.528 07:58:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.528 07:58:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.528 07:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.528 [2024-06-11 07:58:29.989813] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:59.528 [2024-06-11 07:58:29.989873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844047 ] 00:05:59.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.528 [2024-06-11 07:58:30.054672] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.528 [2024-06-11 07:58:30.054710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.528 [2024-06-11 07:58:30.122770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.528 [2024-06-11 07:58:30.123024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.528 [2024-06-11 07:58:30.123140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.528 [2024-06-11 07:58:30.123143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.468 07:58:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.468 07:58:30 -- common/autotest_common.sh@852 -- # return 0 00:06:00.468 07:58:30 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:00.469 07:58:30 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=844310 00:06:00.469 07:58:30 -- event/cpu_locks.sh@153 -- # waitforlisten 844310 /var/tmp/spdk2.sock 00:06:00.469 07:58:30 -- common/autotest_common.sh@819 -- # '[' -z 844310 ']' 00:06:00.469 07:58:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.469 07:58:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.469 07:58:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.469 07:58:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.469 07:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.469 [2024-06-11 07:58:30.793996] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:00.469 [2024-06-11 07:58:30.794047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844310 ] 00:06:00.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.469 [2024-06-11 07:58:30.867373] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.469 [2024-06-11 07:58:30.867395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.469 [2024-06-11 07:58:30.970724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.469 [2024-06-11 07:58:30.970955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.469 [2024-06-11 07:58:30.971112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.469 [2024-06-11 07:58:30.971114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:01.039 07:58:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.039 07:58:31 -- common/autotest_common.sh@852 -- # return 0 00:06:01.039 07:58:31 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.039 07:58:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.039 07:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.039 07:58:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.039 07:58:31 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.039 07:58:31 -- common/autotest_common.sh@640 -- # local es=0 00:06:01.039 07:58:31 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.039 07:58:31 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:01.039 07:58:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:01.039 07:58:31 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:01.039 07:58:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:01.039 07:58:31 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.039 07:58:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.039 07:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.039 [2024-06-11 07:58:31.567498] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 844047 has claimed it. 00:06:01.039 request: 00:06:01.039 { 00:06:01.039 "method": "framework_enable_cpumask_locks", 00:06:01.039 "req_id": 1 00:06:01.039 } 00:06:01.039 Got JSON-RPC error response 00:06:01.040 response: 00:06:01.040 { 00:06:01.040 "code": -32603, 00:06:01.040 "message": "Failed to claim CPU core: 2" 00:06:01.040 } 00:06:01.040 07:58:31 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:01.040 07:58:31 -- common/autotest_common.sh@643 -- # es=1 00:06:01.040 07:58:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:01.040 07:58:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:01.040 07:58:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:01.040 07:58:31 -- event/cpu_locks.sh@158 -- # waitforlisten 844047 /var/tmp/spdk.sock 00:06:01.040 07:58:31 -- common/autotest_common.sh@819 -- # '[' -z 844047 ']' 00:06:01.040 07:58:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.040 07:58:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.040 07:58:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.040 07:58:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.040 07:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.300 07:58:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.300 07:58:31 -- common/autotest_common.sh@852 -- # return 0 00:06:01.300 07:58:31 -- event/cpu_locks.sh@159 -- # waitforlisten 844310 /var/tmp/spdk2.sock 00:06:01.300 07:58:31 -- common/autotest_common.sh@819 -- # '[' -z 844310 ']' 00:06:01.300 07:58:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.300 07:58:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.300 07:58:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.300 07:58:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.300 07:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.300 07:58:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.300 07:58:31 -- common/autotest_common.sh@852 -- # return 0 00:06:01.300 07:58:31 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.300 07:58:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.300 07:58:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.300 07:58:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.300 00:06:01.300 real 0m1.963s 00:06:01.300 user 0m0.750s 00:06:01.300 sys 0m0.147s 00:06:01.300 07:58:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.301 07:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.301 ************************************ 00:06:01.301 END TEST locking_overlapped_coremask_via_rpc 00:06:01.301 ************************************ 00:06:01.301 07:58:31 -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.301 07:58:31 -- event/cpu_locks.sh@15 -- # [[ -z 844047 ]] 00:06:01.301 07:58:31 -- event/cpu_locks.sh@15 -- # killprocess 844047 00:06:01.301 07:58:31 -- common/autotest_common.sh@926 -- # '[' -z 844047 ']' 00:06:01.301 07:58:31 -- common/autotest_common.sh@930 -- # kill -0 844047 00:06:01.301 07:58:31 -- common/autotest_common.sh@931 -- # uname 00:06:01.301 07:58:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.301 07:58:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 844047 00:06:01.561 07:58:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:01.561 07:58:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:01.561 07:58:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 844047' 00:06:01.561 killing process with pid 844047 00:06:01.561 07:58:31 -- common/autotest_common.sh@945 -- # kill 844047 00:06:01.561 07:58:31 -- common/autotest_common.sh@950 -- # wait 844047 00:06:01.821 07:58:32 -- event/cpu_locks.sh@16 -- # [[ -z 844310 ]] 00:06:01.821 07:58:32 -- event/cpu_locks.sh@16 -- # killprocess 844310 00:06:01.821 07:58:32 -- common/autotest_common.sh@926 -- # '[' -z 844310 ']' 00:06:01.821 07:58:32 -- common/autotest_common.sh@930 -- # kill -0 844310 00:06:01.821 07:58:32 -- common/autotest_common.sh@931 -- # uname 00:06:01.821 
07:58:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.821 07:58:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 844310 00:06:01.821 07:58:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:01.821 07:58:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:01.821 07:58:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 844310' 00:06:01.821 killing process with pid 844310 00:06:01.821 07:58:32 -- common/autotest_common.sh@945 -- # kill 844310 00:06:01.821 07:58:32 -- common/autotest_common.sh@950 -- # wait 844310 00:06:01.821 07:58:32 -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.821 07:58:32 -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.821 07:58:32 -- event/cpu_locks.sh@15 -- # [[ -z 844047 ]] 00:06:01.821 07:58:32 -- event/cpu_locks.sh@15 -- # killprocess 844047 00:06:01.821 07:58:32 -- common/autotest_common.sh@926 -- # '[' -z 844047 ']' 00:06:01.821 07:58:32 -- common/autotest_common.sh@930 -- # kill -0 844047 00:06:01.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (844047) - No such process 00:06:01.821 07:58:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 844047 is not found' 00:06:01.821 Process with pid 844047 is not found 00:06:01.821 07:58:32 -- event/cpu_locks.sh@16 -- # [[ -z 844310 ]] 00:06:01.821 07:58:32 -- event/cpu_locks.sh@16 -- # killprocess 844310 00:06:01.821 07:58:32 -- common/autotest_common.sh@926 -- # '[' -z 844310 ']' 00:06:01.821 07:58:32 -- common/autotest_common.sh@930 -- # kill -0 844310 00:06:01.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (844310) - No such process 00:06:01.821 07:58:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 844310 is not found' 00:06:01.821 Process with pid 844310 is not found 00:06:01.821 07:58:32 -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.821 00:06:01.821 real 0m15.243s 00:06:01.821 user 0m26.501s 00:06:01.821 sys 0m4.428s 00:06:01.821 07:58:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.821 07:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.821 ************************************ 00:06:01.821 END TEST cpu_locks 00:06:01.821 ************************************ 00:06:02.081 00:06:02.081 real 0m40.952s 00:06:02.081 user 1m20.924s 00:06:02.081 sys 0m7.255s 00:06:02.081 07:58:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.081 07:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:02.082 ************************************ 00:06:02.082 END TEST event 00:06:02.082 ************************************ 00:06:02.082 07:58:32 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.082 07:58:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.082 07:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.082 07:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:02.082 ************************************ 00:06:02.082 START TEST thread 00:06:02.082 ************************************ 00:06:02.082 07:58:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:02.082 * Looking for test storage... 
00:06:02.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:02.082 07:58:32 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.082 07:58:32 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:02.082 07:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.082 07:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:02.082 ************************************ 00:06:02.082 START TEST thread_poller_perf 00:06:02.082 ************************************ 00:06:02.082 07:58:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.082 [2024-06-11 07:58:32.662370] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:02.082 [2024-06-11 07:58:32.662488] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844747 ] 00:06:02.082 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.082 [2024-06-11 07:58:32.727691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.342 [2024-06-11 07:58:32.792910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.342 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:03.282 ====================================== 00:06:03.283 busy:2415026914 (cyc) 00:06:03.283 total_run_count: 275000 00:06:03.283 tsc_hz: 2400000000 (cyc) 00:06:03.283 ====================================== 00:06:03.283 poller_cost: 8781 (cyc), 3658 (nsec) 00:06:03.283 00:06:03.283 real 0m1.214s 00:06:03.283 user 0m1.139s 00:06:03.283 sys 0m0.069s 00:06:03.283 07:58:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.283 07:58:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.283 ************************************ 00:06:03.283 END TEST thread_poller_perf 00:06:03.283 ************************************ 00:06:03.283 07:58:33 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.283 07:58:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:03.283 07:58:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.283 07:58:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.283 ************************************ 00:06:03.283 START TEST thread_poller_perf 00:06:03.283 ************************************ 00:06:03.283 07:58:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.283 [2024-06-11 07:58:33.917707] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:03.283 [2024-06-11 07:58:33.917819] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845097 ] 00:06:03.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.543 [2024-06-11 07:58:33.981989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.543 [2024-06-11 07:58:34.045476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.543 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:04.484 ====================================== 00:06:04.484 busy:2402493142 (cyc) 00:06:04.484 total_run_count: 3804000 00:06:04.484 tsc_hz: 2400000000 (cyc) 00:06:04.484 ====================================== 00:06:04.484 poller_cost: 631 (cyc), 262 (nsec) 00:06:04.484 00:06:04.484 real 0m1.201s 00:06:04.484 user 0m1.130s 00:06:04.484 sys 0m0.067s 00:06:04.484 07:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.484 07:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.484 ************************************ 00:06:04.484 END TEST thread_poller_perf 00:06:04.484 ************************************ 00:06:04.745 07:58:35 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:04.745 00:06:04.745 real 0m2.594s 00:06:04.745 user 0m2.342s 00:06:04.745 sys 0m0.264s 00:06:04.745 07:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.745 07:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.745 ************************************ 00:06:04.745 END TEST thread 00:06:04.745 ************************************ 00:06:04.745 07:58:35 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:04.745 07:58:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.745 07:58:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.745 07:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.745 ************************************ 00:06:04.745 START TEST accel 00:06:04.745 ************************************ 00:06:04.745 07:58:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:04.745 * Looking for test storage... 00:06:04.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:04.745 07:58:35 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:04.745 07:58:35 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:04.745 07:58:35 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.745 07:58:35 -- accel/accel.sh@59 -- # spdk_tgt_pid=845395 00:06:04.745 07:58:35 -- accel/accel.sh@60 -- # waitforlisten 845395 00:06:04.745 07:58:35 -- common/autotest_common.sh@819 -- # '[' -z 845395 ']' 00:06:04.745 07:58:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.745 07:58:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.745 07:58:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.745 07:58:35 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:04.745 07:58:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.745 07:58:35 -- accel/accel.sh@58 -- # build_accel_config 00:06:04.745 07:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.745 07:58:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.745 07:58:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.745 07:58:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.745 07:58:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.745 07:58:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.745 07:58:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.745 07:58:35 -- accel/accel.sh@42 -- # jq -r . 00:06:04.745 [2024-06-11 07:58:35.330048] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:04.745 [2024-06-11 07:58:35.330123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845395 ] 00:06:04.745 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.005 [2024-06-11 07:58:35.394534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.005 [2024-06-11 07:58:35.467297] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.005 [2024-06-11 07:58:35.467427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.575 07:58:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.575 07:58:36 -- common/autotest_common.sh@852 -- # return 0 00:06:05.575 07:58:36 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:05.575 07:58:36 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:05.575 07:58:36 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:05.575 07:58:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.575 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.575 07:58:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 
07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # IFS== 00:06:05.575 07:58:36 -- accel/accel.sh@64 -- # read -r opc module 00:06:05.575 07:58:36 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:05.575 07:58:36 -- accel/accel.sh@67 -- # killprocess 845395 00:06:05.575 07:58:36 -- common/autotest_common.sh@926 -- # '[' -z 845395 ']' 00:06:05.575 07:58:36 -- common/autotest_common.sh@930 -- # kill -0 845395 00:06:05.575 07:58:36 -- common/autotest_common.sh@931 -- # uname 00:06:05.575 07:58:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.575 07:58:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 845395 00:06:05.575 07:58:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.575 07:58:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.575 07:58:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 845395' 00:06:05.575 killing process with pid 845395 00:06:05.575 07:58:36 -- common/autotest_common.sh@945 -- # kill 845395 00:06:05.575 07:58:36 -- common/autotest_common.sh@950 -- # wait 845395 00:06:05.836 07:58:36 -- accel/accel.sh@68 -- # trap - ERR 00:06:05.836 07:58:36 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:05.836 07:58:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:05.836 07:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.836 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.836 07:58:36 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:05.836 07:58:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:05.836 07:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.836 07:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.836 07:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.836 07:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.836 07:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.836 07:58:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.836 07:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.836 07:58:36 -- accel/accel.sh@42 -- # jq -r . 
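The expected_opcs table filled in by the loop above comes from the accel_get_opc_assignments RPC, whose JSON reply is flattened into opc=module pairs by the jq filter quoted in the trace. A standalone illustration of that filter on a made-up reply (the object below is illustrative, not real RPC output):

  echo '{"copy":"software","fill":"software","crc32c":"software"}' \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # prints one line per opcode:
  #   copy=software
  #   fill=software
  #   crc32c=software

Each line is then split on '=' (IFS==) into the opc and module variables read by the loop.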
00:06:05.836 07:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.836 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.836 07:58:36 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:05.836 07:58:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:05.836 07:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.836 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.836 ************************************ 00:06:05.836 START TEST accel_missing_filename 00:06:05.836 ************************************ 00:06:05.836 07:58:36 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:05.836 07:58:36 -- common/autotest_common.sh@640 -- # local es=0 00:06:05.836 07:58:36 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:05.836 07:58:36 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:05.836 07:58:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:05.836 07:58:36 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:05.836 07:58:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:05.836 07:58:36 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:05.836 07:58:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:05.836 07:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.836 07:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.836 07:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.836 07:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.836 07:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.836 07:58:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.836 07:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.836 07:58:36 -- accel/accel.sh@42 -- # jq -r . 00:06:06.096 [2024-06-11 07:58:36.505504] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:06.096 [2024-06-11 07:58:36.505587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845542 ] 00:06:06.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.096 [2024-06-11 07:58:36.568657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.096 [2024-06-11 07:58:36.632740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.096 [2024-06-11 07:58:36.664489] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.096 [2024-06-11 07:58:36.701387] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:06.357 A filename is required. 
00:06:06.357 07:58:36 -- common/autotest_common.sh@643 -- # es=234 00:06:06.357 07:58:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:06.357 07:58:36 -- common/autotest_common.sh@652 -- # es=106 00:06:06.357 07:58:36 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:06.357 07:58:36 -- common/autotest_common.sh@660 -- # es=1 00:06:06.357 07:58:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:06.357 00:06:06.357 real 0m0.279s 00:06:06.357 user 0m0.215s 00:06:06.357 sys 0m0.103s 00:06:06.358 07:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.358 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.358 ************************************ 00:06:06.358 END TEST accel_missing_filename 00:06:06.358 ************************************ 00:06:06.358 07:58:36 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.358 07:58:36 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:06.358 07:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.358 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.358 ************************************ 00:06:06.358 START TEST accel_compress_verify 00:06:06.358 ************************************ 00:06:06.358 07:58:36 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.358 07:58:36 -- common/autotest_common.sh@640 -- # local es=0 00:06:06.358 07:58:36 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.358 07:58:36 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:06.358 07:58:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.358 07:58:36 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:06.358 07:58:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.358 07:58:36 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.358 07:58:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:06.358 07:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.358 07:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.358 07:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.358 07:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.358 07:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.358 07:58:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.358 07:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.358 07:58:36 -- accel/accel.sh@42 -- # jq -r . 00:06:06.358 [2024-06-11 07:58:36.827989] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:06.358 [2024-06-11 07:58:36.828089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845684 ] 00:06:06.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.358 [2024-06-11 07:58:36.891369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.358 [2024-06-11 07:58:36.956041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.358 [2024-06-11 07:58:36.987826] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.619 [2024-06-11 07:58:37.024589] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:06.619 00:06:06.619 Compression does not support the verify option, aborting. 00:06:06.619 07:58:37 -- common/autotest_common.sh@643 -- # es=161 00:06:06.619 07:58:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:06.619 07:58:37 -- common/autotest_common.sh@652 -- # es=33 00:06:06.619 07:58:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:06.619 07:58:37 -- common/autotest_common.sh@660 -- # es=1 00:06:06.619 07:58:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:06.619 00:06:06.619 real 0m0.280s 00:06:06.619 user 0m0.217s 00:06:06.619 sys 0m0.105s 00:06:06.619 07:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.619 07:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.619 ************************************ 00:06:06.619 END TEST accel_compress_verify 00:06:06.619 ************************************ 00:06:06.619 07:58:37 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:06.619 07:58:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:06.619 07:58:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.619 07:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.619 ************************************ 00:06:06.619 START TEST accel_wrong_workload 00:06:06.619 ************************************ 00:06:06.619 07:58:37 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:06.619 07:58:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:06.619 07:58:37 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:06.619 07:58:37 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:06.619 07:58:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.619 07:58:37 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:06.619 07:58:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.619 07:58:37 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:06.619 07:58:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:06.619 07:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.619 07:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.619 07:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.619 07:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.619 07:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.619 07:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.619 07:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.619 07:58:37 -- accel/accel.sh@42 -- # jq -r . 
00:06:06.619 Unsupported workload type: foobar 00:06:06.619 [2024-06-11 07:58:37.149812] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:06.619 accel_perf options: 00:06:06.619 [-h help message] 00:06:06.619 [-q queue depth per core] 00:06:06.619 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.619 [-T number of threads per core 00:06:06.619 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:06.619 [-t time in seconds] 00:06:06.619 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.619 [ dif_verify, , dif_generate, dif_generate_copy 00:06:06.619 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.619 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.619 [-S for crc32c workload, use this seed value (default 0) 00:06:06.619 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.619 [-f for fill workload, use this BYTE value (default 255) 00:06:06.619 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.619 [-y verify result if this switch is on] 00:06:06.619 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.619 Can be used to spread operations across a wider range of memory. 00:06:06.619 07:58:37 -- common/autotest_common.sh@643 -- # es=1 00:06:06.619 07:58:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:06.619 07:58:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:06.619 07:58:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:06.619 00:06:06.619 real 0m0.038s 00:06:06.619 user 0m0.024s 00:06:06.619 sys 0m0.014s 00:06:06.619 07:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.619 07:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.619 ************************************ 00:06:06.619 END TEST accel_wrong_workload 00:06:06.619 ************************************ 00:06:06.619 Error: writing output failed: Broken pipe 00:06:06.619 07:58:37 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.619 07:58:37 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:06.619 07:58:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.619 07:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.619 ************************************ 00:06:06.619 START TEST accel_negative_buffers 00:06:06.619 ************************************ 00:06:06.619 07:58:37 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:06.619 07:58:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:06.619 07:58:37 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:06.619 07:58:37 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:06.619 07:58:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.619 07:58:37 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:06.619 07:58:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:06.619 07:58:37 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:06.619 07:58:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:06.619 07:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.619 07:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.619 07:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.619 07:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.619 07:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.619 07:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.619 07:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.619 07:58:37 -- accel/accel.sh@42 -- # jq -r . 00:06:06.619 -x option must be non-negative. 00:06:06.619 [2024-06-11 07:58:37.227804] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:06.619 accel_perf options: 00:06:06.619 [-h help message] 00:06:06.619 [-q queue depth per core] 00:06:06.619 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:06.619 [-T number of threads per core 00:06:06.619 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:06.619 [-t time in seconds] 00:06:06.619 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:06.619 [ dif_verify, , dif_generate, dif_generate_copy 00:06:06.619 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:06.619 [-l for compress/decompress workloads, name of uncompressed input file 00:06:06.619 [-S for crc32c workload, use this seed value (default 0) 00:06:06.619 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:06.619 [-f for fill workload, use this BYTE value (default 255) 00:06:06.619 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:06.619 [-y verify result if this switch is on] 00:06:06.619 [-a tasks to allocate per core (default: same value as -q)] 00:06:06.619 Can be used to spread operations across a wider range of memory. 
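The option listing above (printed by both negative tests) documents the accel_perf flags exercised in this suite. As a minimal sketch of a well-formed invocation, using only options and workload names from that listing and the binary path already used by these tests (the specific values are illustrative):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w crc32c -S 32 -q 32 -o 4096 -y    # 1 s crc32c run, seed 32, qdepth 32, 4 KiB transfers, verify on

The two negative tests around this listing deliberately pass invalid values (-w foobar above, -x -1 here) to confirm that argument parsing rejects them with a non-zero exit status.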
00:06:06.619 07:58:37 -- common/autotest_common.sh@643 -- # es=1 00:06:06.619 07:58:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:06.619 07:58:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:06.619 07:58:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:06.619 00:06:06.619 real 0m0.036s 00:06:06.619 user 0m0.026s 00:06:06.619 sys 0m0.009s 00:06:06.619 07:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.619 07:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.619 ************************************ 00:06:06.619 END TEST accel_negative_buffers 00:06:06.619 ************************************ 00:06:06.619 Error: writing output failed: Broken pipe 00:06:06.879 07:58:37 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:06.879 07:58:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:06.879 07:58:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.879 07:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.879 ************************************ 00:06:06.879 START TEST accel_crc32c 00:06:06.880 ************************************ 00:06:06.880 07:58:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:06.880 07:58:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.880 07:58:37 -- accel/accel.sh@17 -- # local accel_module 00:06:06.880 07:58:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:06.880 07:58:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:06.880 07:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.880 07:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.880 07:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.880 07:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.880 07:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.880 07:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.880 07:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.880 07:58:37 -- accel/accel.sh@42 -- # jq -r . 00:06:06.880 [2024-06-11 07:58:37.303212] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:06.880 [2024-06-11 07:58:37.303277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845944 ] 00:06:06.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.880 [2024-06-11 07:58:37.364284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.880 [2024-06-11 07:58:37.426270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.262 07:58:38 -- accel/accel.sh@18 -- # out=' 00:06:08.262 SPDK Configuration: 00:06:08.262 Core mask: 0x1 00:06:08.262 00:06:08.262 Accel Perf Configuration: 00:06:08.262 Workload Type: crc32c 00:06:08.263 CRC-32C seed: 32 00:06:08.263 Transfer size: 4096 bytes 00:06:08.263 Vector count 1 00:06:08.263 Module: software 00:06:08.263 Queue depth: 32 00:06:08.263 Allocate depth: 32 00:06:08.263 # threads/core: 1 00:06:08.263 Run time: 1 seconds 00:06:08.263 Verify: Yes 00:06:08.263 00:06:08.263 Running for 1 seconds... 
00:06:08.263 00:06:08.263 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.263 ------------------------------------------------------------------------------------ 00:06:08.263 0,0 446528/s 1744 MiB/s 0 0 00:06:08.263 ==================================================================================== 00:06:08.263 Total 446528/s 1744 MiB/s 0 0' 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:08.263 07:58:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:08.263 07:58:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.263 07:58:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.263 07:58:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.263 07:58:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.263 07:58:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.263 07:58:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.263 07:58:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.263 07:58:38 -- accel/accel.sh@42 -- # jq -r . 00:06:08.263 [2024-06-11 07:58:38.578541] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:08.263 [2024-06-11 07:58:38.578647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846121 ] 00:06:08.263 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.263 [2024-06-11 07:58:38.640478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.263 [2024-06-11 07:58:38.702513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val= 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val= 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=0x1 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val= 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val= 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=crc32c 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=32 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 
07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val= 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=software 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=32 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=32 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=1 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val=Yes 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val= 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:08.263 07:58:38 -- accel/accel.sh@21 -- # val= 00:06:08.263 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:08.263 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:09.208 07:58:39 -- accel/accel.sh@21 -- # val= 00:06:09.208 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.208 07:58:39 -- accel/accel.sh@21 -- # val= 00:06:09.208 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.208 07:58:39 -- accel/accel.sh@21 -- # val= 00:06:09.208 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.208 07:58:39 -- accel/accel.sh@21 -- # val= 00:06:09.208 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.208 07:58:39 -- accel/accel.sh@21 -- # val= 00:06:09.208 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 
00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.208 07:58:39 -- accel/accel.sh@21 -- # val= 00:06:09.208 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:09.208 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:09.208 07:58:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.208 07:58:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:09.208 07:58:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.208 00:06:09.208 real 0m2.556s 00:06:09.208 user 0m2.353s 00:06:09.208 sys 0m0.209s 00:06:09.208 07:58:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.208 07:58:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.208 ************************************ 00:06:09.208 END TEST accel_crc32c 00:06:09.208 ************************************ 00:06:09.468 07:58:39 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:09.468 07:58:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:09.468 07:58:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.468 07:58:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.468 ************************************ 00:06:09.468 START TEST accel_crc32c_C2 00:06:09.468 ************************************ 00:06:09.468 07:58:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:09.468 07:58:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.468 07:58:39 -- accel/accel.sh@17 -- # local accel_module 00:06:09.468 07:58:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:09.468 07:58:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:09.468 07:58:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.468 07:58:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.468 07:58:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.468 07:58:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.468 07:58:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.468 07:58:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.468 07:58:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.468 07:58:39 -- accel/accel.sh@42 -- # jq -r . 00:06:09.468 [2024-06-11 07:58:39.902747] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
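The bandwidth column in the crc32c table above appears to be simply transfers per second times the 4096-byte transfer size, expressed in MiB/s. Checking with the reported figures (plain shell arithmetic, not part of the test):

  transfers=446528; bytes=4096
  echo "$(( transfers * bytes / 1024 / 1024 )) MiB/s"    # 1744 MiB/s, matching the table

The later copy and fill tables follow the same relation (305024/s -> 1191 MiB/s and 470656/s -> 1838 MiB/s).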
00:06:09.468 [2024-06-11 07:58:39.902848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846312 ] 00:06:09.468 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.468 [2024-06-11 07:58:39.965949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.468 [2024-06-11 07:58:40.030853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.850 07:58:41 -- accel/accel.sh@18 -- # out=' 00:06:10.850 SPDK Configuration: 00:06:10.850 Core mask: 0x1 00:06:10.850 00:06:10.850 Accel Perf Configuration: 00:06:10.850 Workload Type: crc32c 00:06:10.850 CRC-32C seed: 0 00:06:10.850 Transfer size: 4096 bytes 00:06:10.850 Vector count 2 00:06:10.850 Module: software 00:06:10.850 Queue depth: 32 00:06:10.850 Allocate depth: 32 00:06:10.850 # threads/core: 1 00:06:10.850 Run time: 1 seconds 00:06:10.850 Verify: Yes 00:06:10.850 00:06:10.850 Running for 1 seconds... 00:06:10.850 00:06:10.850 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.850 ------------------------------------------------------------------------------------ 00:06:10.850 0,0 377280/s 2947 MiB/s 0 0 00:06:10.850 ==================================================================================== 00:06:10.850 Total 377280/s 1473 MiB/s 0 0' 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:10.850 07:58:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:10.850 07:58:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.850 07:58:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.850 07:58:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.850 07:58:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.850 07:58:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.850 07:58:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.850 07:58:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.850 07:58:41 -- accel/accel.sh@42 -- # jq -r . 00:06:10.850 [2024-06-11 07:58:41.183028] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:10.850 [2024-06-11 07:58:41.183131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846648 ] 00:06:10.850 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.850 [2024-06-11 07:58:41.244600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.850 [2024-06-11 07:58:41.305920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val= 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val= 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val=0x1 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val= 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val= 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val=crc32c 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val=0 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val= 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val=software 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val=32 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val=32 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- 
accel/accel.sh@21 -- # val=1 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val=Yes 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val= 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:10.850 07:58:41 -- accel/accel.sh@21 -- # val= 00:06:10.850 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:10.850 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:11.790 07:58:42 -- accel/accel.sh@21 -- # val= 00:06:11.790 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.790 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.790 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.790 07:58:42 -- accel/accel.sh@21 -- # val= 00:06:11.790 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.790 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.790 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.790 07:58:42 -- accel/accel.sh@21 -- # val= 00:06:11.790 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.791 07:58:42 -- accel/accel.sh@21 -- # val= 00:06:11.791 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.791 07:58:42 -- accel/accel.sh@21 -- # val= 00:06:11.791 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.791 07:58:42 -- accel/accel.sh@21 -- # val= 00:06:11.791 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:11.791 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:11.791 07:58:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.791 07:58:42 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:11.791 07:58:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.791 00:06:11.791 real 0m2.560s 00:06:11.791 user 0m2.361s 00:06:11.791 sys 0m0.205s 00:06:11.791 07:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.791 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.791 ************************************ 00:06:11.791 END TEST accel_crc32c_C2 00:06:11.791 ************************************ 00:06:12.051 07:58:42 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:12.051 07:58:42 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:12.051 07:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.051 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.051 ************************************ 00:06:12.051 START TEST accel_copy 
00:06:12.051 ************************************ 00:06:12.051 07:58:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:12.051 07:58:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.051 07:58:42 -- accel/accel.sh@17 -- # local accel_module 00:06:12.051 07:58:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:12.051 07:58:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:12.051 07:58:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.051 07:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.051 07:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.051 07:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.051 07:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.051 07:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.051 07:58:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.051 07:58:42 -- accel/accel.sh@42 -- # jq -r . 00:06:12.051 [2024-06-11 07:58:42.504075] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:12.051 [2024-06-11 07:58:42.504143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847003 ] 00:06:12.051 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.051 [2024-06-11 07:58:42.565971] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.051 [2024-06-11 07:58:42.630575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.437 07:58:43 -- accel/accel.sh@18 -- # out=' 00:06:13.437 SPDK Configuration: 00:06:13.437 Core mask: 0x1 00:06:13.437 00:06:13.437 Accel Perf Configuration: 00:06:13.437 Workload Type: copy 00:06:13.437 Transfer size: 4096 bytes 00:06:13.437 Vector count 1 00:06:13.437 Module: software 00:06:13.437 Queue depth: 32 00:06:13.437 Allocate depth: 32 00:06:13.437 # threads/core: 1 00:06:13.437 Run time: 1 seconds 00:06:13.437 Verify: Yes 00:06:13.437 00:06:13.437 Running for 1 seconds... 00:06:13.437 00:06:13.437 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.437 ------------------------------------------------------------------------------------ 00:06:13.437 0,0 305024/s 1191 MiB/s 0 0 00:06:13.437 ==================================================================================== 00:06:13.437 Total 305024/s 1191 MiB/s 0 0' 00:06:13.437 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.437 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.437 07:58:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:13.437 07:58:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:13.437 07:58:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.437 07:58:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.437 07:58:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.437 07:58:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.437 07:58:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.437 07:58:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.437 07:58:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.437 07:58:43 -- accel/accel.sh@42 -- # jq -r . 00:06:13.437 [2024-06-11 07:58:43.783867] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:13.438 [2024-06-11 07:58:43.783965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847257 ] 00:06:13.438 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.438 [2024-06-11 07:58:43.845258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.438 [2024-06-11 07:58:43.906994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val= 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val= 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val=0x1 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val= 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val= 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val=copy 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val= 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val=software 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val=32 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val=32 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val=1 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val=Yes 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val= 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:13.438 07:58:43 -- accel/accel.sh@21 -- # val= 00:06:13.438 07:58:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:06:13.438 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:06:14.821 07:58:45 -- accel/accel.sh@21 -- # val= 00:06:14.821 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:14.821 07:58:45 -- accel/accel.sh@21 -- # val= 00:06:14.821 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:14.821 07:58:45 -- accel/accel.sh@21 -- # val= 00:06:14.821 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:14.821 07:58:45 -- accel/accel.sh@21 -- # val= 00:06:14.821 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:14.821 07:58:45 -- accel/accel.sh@21 -- # val= 00:06:14.821 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:14.821 07:58:45 -- accel/accel.sh@21 -- # val= 00:06:14.821 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:14.821 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:14.821 07:58:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.821 07:58:45 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:14.821 07:58:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.821 00:06:14.821 real 0m2.560s 00:06:14.821 user 0m2.361s 00:06:14.821 sys 0m0.205s 00:06:14.821 07:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.821 07:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:14.821 ************************************ 00:06:14.821 END TEST accel_copy 00:06:14.821 ************************************ 00:06:14.821 07:58:45 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.821 07:58:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:14.821 07:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.821 07:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:14.821 ************************************ 00:06:14.821 START TEST accel_fill 00:06:14.821 ************************************ 00:06:14.821 07:58:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.821 07:58:45 -- accel/accel.sh@16 -- # local accel_opc 
00:06:14.821 07:58:45 -- accel/accel.sh@17 -- # local accel_module 00:06:14.821 07:58:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.821 07:58:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:14.821 07:58:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.821 07:58:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.821 07:58:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.821 07:58:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.821 07:58:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.821 07:58:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.821 07:58:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.821 07:58:45 -- accel/accel.sh@42 -- # jq -r . 00:06:14.821 [2024-06-11 07:58:45.105210] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:14.821 [2024-06-11 07:58:45.105281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847416 ] 00:06:14.821 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.822 [2024-06-11 07:58:45.167239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.822 [2024-06-11 07:58:45.231178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.763 07:58:46 -- accel/accel.sh@18 -- # out=' 00:06:15.763 SPDK Configuration: 00:06:15.763 Core mask: 0x1 00:06:15.763 00:06:15.763 Accel Perf Configuration: 00:06:15.763 Workload Type: fill 00:06:15.763 Fill pattern: 0x80 00:06:15.763 Transfer size: 4096 bytes 00:06:15.763 Vector count 1 00:06:15.763 Module: software 00:06:15.763 Queue depth: 64 00:06:15.763 Allocate depth: 64 00:06:15.763 # threads/core: 1 00:06:15.763 Run time: 1 seconds 00:06:15.763 Verify: Yes 00:06:15.764 00:06:15.764 Running for 1 seconds... 00:06:15.764 00:06:15.764 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.764 ------------------------------------------------------------------------------------ 00:06:15.764 0,0 470656/s 1838 MiB/s 0 0 00:06:15.764 ==================================================================================== 00:06:15.764 Total 470656/s 1838 MiB/s 0 0' 00:06:15.764 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:15.764 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:15.764 07:58:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.764 07:58:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.764 07:58:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.764 07:58:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.764 07:58:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.764 07:58:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.764 07:58:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.764 07:58:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.764 07:58:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.764 07:58:46 -- accel/accel.sh@42 -- # jq -r . 00:06:15.764 [2024-06-11 07:58:46.381614] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:15.764 [2024-06-11 07:58:46.381692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847714 ] 00:06:15.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.024 [2024-06-11 07:58:46.442990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.024 [2024-06-11 07:58:46.504391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val= 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val= 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val=0x1 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val= 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val= 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val=fill 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val=0x80 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val= 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val=software 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val=64 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val=64 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- 
accel/accel.sh@21 -- # val=1 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val=Yes 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val= 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:16.024 07:58:46 -- accel/accel.sh@21 -- # val= 00:06:16.024 07:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:06:16.024 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:06:17.408 07:58:47 -- accel/accel.sh@21 -- # val= 00:06:17.408 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.408 07:58:47 -- accel/accel.sh@21 -- # val= 00:06:17.408 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.408 07:58:47 -- accel/accel.sh@21 -- # val= 00:06:17.408 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.408 07:58:47 -- accel/accel.sh@21 -- # val= 00:06:17.408 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.408 07:58:47 -- accel/accel.sh@21 -- # val= 00:06:17.408 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.408 07:58:47 -- accel/accel.sh@21 -- # val= 00:06:17.408 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:17.408 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:17.408 07:58:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.408 07:58:47 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:17.408 07:58:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.408 00:06:17.408 real 0m2.554s 00:06:17.408 user 0m2.363s 00:06:17.408 sys 0m0.196s 00:06:17.408 07:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.408 07:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.408 ************************************ 00:06:17.408 END TEST accel_fill 00:06:17.408 ************************************ 00:06:17.409 07:58:47 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:17.409 07:58:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:17.409 07:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.409 07:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.409 ************************************ 00:06:17.409 START TEST 
accel_copy_crc32c 00:06:17.409 ************************************ 00:06:17.409 07:58:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:17.409 07:58:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.409 07:58:47 -- accel/accel.sh@17 -- # local accel_module 00:06:17.409 07:58:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:17.409 07:58:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:17.409 07:58:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.409 07:58:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.409 07:58:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.409 07:58:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.409 07:58:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.409 07:58:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.409 07:58:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.409 07:58:47 -- accel/accel.sh@42 -- # jq -r . 00:06:17.409 [2024-06-11 07:58:47.702969] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:17.409 [2024-06-11 07:58:47.703069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848065 ] 00:06:17.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.409 [2024-06-11 07:58:47.782047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.409 [2024-06-11 07:58:47.848048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.350 07:58:48 -- accel/accel.sh@18 -- # out=' 00:06:18.350 SPDK Configuration: 00:06:18.350 Core mask: 0x1 00:06:18.350 00:06:18.350 Accel Perf Configuration: 00:06:18.350 Workload Type: copy_crc32c 00:06:18.350 CRC-32C seed: 0 00:06:18.350 Vector size: 4096 bytes 00:06:18.350 Transfer size: 4096 bytes 00:06:18.350 Vector count 1 00:06:18.350 Module: software 00:06:18.350 Queue depth: 32 00:06:18.350 Allocate depth: 32 00:06:18.350 # threads/core: 1 00:06:18.350 Run time: 1 seconds 00:06:18.350 Verify: Yes 00:06:18.350 00:06:18.350 Running for 1 seconds... 00:06:18.350 00:06:18.350 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.350 ------------------------------------------------------------------------------------ 00:06:18.350 0,0 248128/s 969 MiB/s 0 0 00:06:18.350 ==================================================================================== 00:06:18.350 Total 248128/s 969 MiB/s 0 0' 00:06:18.350 07:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:18.350 07:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:18.350 07:58:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:18.350 07:58:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:18.350 07:58:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.350 07:58:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.350 07:58:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.350 07:58:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.350 07:58:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.350 07:58:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.350 07:58:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.350 07:58:48 -- accel/accel.sh@42 -- # jq -r . 
00:06:18.610 [2024-06-11 07:58:49.000137] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:18.610 [2024-06-11 07:58:49.000232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848395 ] 00:06:18.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.610 [2024-06-11 07:58:49.061272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.610 [2024-06-11 07:58:49.122849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val= 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val= 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=0x1 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val= 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val= 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=0 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val= 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=software 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=32 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 
00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=32 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=1 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val=Yes 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val= 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:18.611 07:58:49 -- accel/accel.sh@21 -- # val= 00:06:18.611 07:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:06:18.611 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:06:19.995 07:58:50 -- accel/accel.sh@21 -- # val= 00:06:19.995 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.995 07:58:50 -- accel/accel.sh@21 -- # val= 00:06:19.995 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.995 07:58:50 -- accel/accel.sh@21 -- # val= 00:06:19.995 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.995 07:58:50 -- accel/accel.sh@21 -- # val= 00:06:19.995 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.995 07:58:50 -- accel/accel.sh@21 -- # val= 00:06:19.995 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.995 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.996 07:58:50 -- accel/accel.sh@21 -- # val= 00:06:19.996 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.996 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:19.996 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:19.996 07:58:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.996 07:58:50 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:19.996 07:58:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.996 00:06:19.996 real 0m2.578s 00:06:19.996 user 0m2.377s 00:06:19.996 sys 0m0.207s 00:06:19.996 07:58:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.996 07:58:50 -- common/autotest_common.sh@10 -- # set +x 00:06:19.996 ************************************ 00:06:19.996 END TEST accel_copy_crc32c 00:06:19.996 ************************************ 00:06:19.996 
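As a rough cross-check of the throughput figures reported above (a back-of-the-envelope calculation, not produced by accel_perf itself): the MiB/s column should equal transfers per second multiplied by the 4096-byte transfer size, e.g. for the copy_crc32c run:

  # sanity check: 248128 transfers/s at 4096 bytes per transfer
  echo $(( 248128 * 4096 / 1024 / 1024 ))   # prints 969, matching the reported 969 MiB/s
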
07:58:50 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:19.996 07:58:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:19.996 07:58:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.996 07:58:50 -- common/autotest_common.sh@10 -- # set +x 00:06:19.996 ************************************ 00:06:19.996 START TEST accel_copy_crc32c_C2 00:06:19.996 ************************************ 00:06:19.996 07:58:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:19.996 07:58:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.996 07:58:50 -- accel/accel.sh@17 -- # local accel_module 00:06:19.996 07:58:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:19.996 07:58:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:19.996 07:58:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.996 07:58:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.996 07:58:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.996 07:58:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.996 07:58:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.996 07:58:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.996 07:58:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.996 07:58:50 -- accel/accel.sh@42 -- # jq -r . 00:06:19.996 [2024-06-11 07:58:50.324348] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:19.996 [2024-06-11 07:58:50.324457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848574 ] 00:06:19.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.996 [2024-06-11 07:58:50.387101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.996 [2024-06-11 07:58:50.449948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.937 07:58:51 -- accel/accel.sh@18 -- # out=' 00:06:20.937 SPDK Configuration: 00:06:20.937 Core mask: 0x1 00:06:20.937 00:06:20.937 Accel Perf Configuration: 00:06:20.937 Workload Type: copy_crc32c 00:06:20.937 CRC-32C seed: 0 00:06:20.937 Vector size: 4096 bytes 00:06:20.937 Transfer size: 8192 bytes 00:06:20.937 Vector count 2 00:06:20.937 Module: software 00:06:20.937 Queue depth: 32 00:06:20.937 Allocate depth: 32 00:06:20.937 # threads/core: 1 00:06:20.937 Run time: 1 seconds 00:06:20.937 Verify: Yes 00:06:20.937 00:06:20.937 Running for 1 seconds... 
00:06:20.937 00:06:20.937 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.937 ------------------------------------------------------------------------------------ 00:06:20.937 0,0 187328/s 1463 MiB/s 0 0 00:06:20.937 ==================================================================================== 00:06:20.937 Total 187328/s 731 MiB/s 0 0' 00:06:20.937 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:20.937 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:20.937 07:58:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:20.937 07:58:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:20.937 07:58:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.937 07:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.937 07:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.937 07:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.937 07:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.937 07:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.937 07:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.937 07:58:51 -- accel/accel.sh@42 -- # jq -r . 00:06:21.198 [2024-06-11 07:58:51.602335] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:21.198 [2024-06-11 07:58:51.602432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848775 ] 00:06:21.198 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.198 [2024-06-11 07:58:51.664333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.198 [2024-06-11 07:58:51.726146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.198 07:58:51 -- accel/accel.sh@21 -- # val= 00:06:21.198 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.198 07:58:51 -- accel/accel.sh@21 -- # val= 00:06:21.198 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.198 07:58:51 -- accel/accel.sh@21 -- # val=0x1 00:06:21.198 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.198 07:58:51 -- accel/accel.sh@21 -- # val= 00:06:21.198 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.198 07:58:51 -- accel/accel.sh@21 -- # val= 00:06:21.198 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.198 07:58:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:21.198 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.198 07:58:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.198 07:58:51 -- accel/accel.sh@21 -- # val=0 00:06:21.198 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # IFS=: 
00:06:21.198 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val= 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val=software 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val=32 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val=32 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val=1 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val=Yes 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val= 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:21.199 07:58:51 -- accel/accel.sh@21 -- # val= 00:06:21.199 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:21.199 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:22.583 07:58:52 -- accel/accel.sh@21 -- # val= 00:06:22.583 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.583 07:58:52 -- accel/accel.sh@21 -- # val= 00:06:22.583 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.583 07:58:52 -- accel/accel.sh@21 -- # val= 00:06:22.583 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.583 07:58:52 -- accel/accel.sh@21 -- # val= 00:06:22.583 07:58:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.583 07:58:52 -- accel/accel.sh@21 -- # val= 00:06:22.583 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.583 07:58:52 -- accel/accel.sh@21 -- # val= 00:06:22.583 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:06:22.583 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:06:22.583 07:58:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.583 07:58:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:22.583 07:58:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.583 00:06:22.583 real 0m2.559s 00:06:22.583 user 0m2.373s 00:06:22.583 sys 0m0.193s 00:06:22.583 07:58:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.583 07:58:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.583 ************************************ 00:06:22.583 END TEST accel_copy_crc32c_C2 00:06:22.583 ************************************ 00:06:22.583 07:58:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:22.583 07:58:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:22.583 07:58:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.583 07:58:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.583 ************************************ 00:06:22.583 START TEST accel_dualcast 00:06:22.583 ************************************ 00:06:22.583 07:58:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:22.583 07:58:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.583 07:58:52 -- accel/accel.sh@17 -- # local accel_module 00:06:22.583 07:58:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:22.583 07:58:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:22.583 07:58:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.583 07:58:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.583 07:58:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.583 07:58:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.583 07:58:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.583 07:58:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.583 07:58:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.583 07:58:52 -- accel/accel.sh@42 -- # jq -r . 00:06:22.583 [2024-06-11 07:58:52.925929] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:22.583 [2024-06-11 07:58:52.926030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849124 ] 00:06:22.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.583 [2024-06-11 07:58:52.988747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.583 [2024-06-11 07:58:53.053232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.968 07:58:54 -- accel/accel.sh@18 -- # out=' 00:06:23.968 SPDK Configuration: 00:06:23.968 Core mask: 0x1 00:06:23.968 00:06:23.968 Accel Perf Configuration: 00:06:23.968 Workload Type: dualcast 00:06:23.968 Transfer size: 4096 bytes 00:06:23.968 Vector count 1 00:06:23.968 Module: software 00:06:23.968 Queue depth: 32 00:06:23.968 Allocate depth: 32 00:06:23.968 # threads/core: 1 00:06:23.968 Run time: 1 seconds 00:06:23.968 Verify: Yes 00:06:23.968 00:06:23.968 Running for 1 seconds... 00:06:23.968 00:06:23.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.968 ------------------------------------------------------------------------------------ 00:06:23.968 0,0 361344/s 1411 MiB/s 0 0 00:06:23.968 ==================================================================================== 00:06:23.968 Total 361344/s 1411 MiB/s 0 0' 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:23.968 07:58:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:23.968 07:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.968 07:58:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.968 07:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.968 07:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.968 07:58:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.968 07:58:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.968 07:58:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.968 07:58:54 -- accel/accel.sh@42 -- # jq -r . 00:06:23.968 [2024-06-11 07:58:54.203595] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:23.968 [2024-06-11 07:58:54.203665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849460 ] 00:06:23.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.968 [2024-06-11 07:58:54.264680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.968 [2024-06-11 07:58:54.325408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val= 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val= 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val=0x1 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val= 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val= 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val=dualcast 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val= 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val=software 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val=32 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.968 07:58:54 -- accel/accel.sh@21 -- # val=32 00:06:23.968 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.968 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.969 07:58:54 -- accel/accel.sh@21 -- # val=1 00:06:23.969 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.969 07:58:54 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:23.969 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.969 07:58:54 -- accel/accel.sh@21 -- # val=Yes 00:06:23.969 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.969 07:58:54 -- accel/accel.sh@21 -- # val= 00:06:23.969 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:23.969 07:58:54 -- accel/accel.sh@21 -- # val= 00:06:23.969 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:23.969 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:24.909 07:58:55 -- accel/accel.sh@21 -- # val= 00:06:24.909 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.909 07:58:55 -- accel/accel.sh@21 -- # val= 00:06:24.909 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.909 07:58:55 -- accel/accel.sh@21 -- # val= 00:06:24.909 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.909 07:58:55 -- accel/accel.sh@21 -- # val= 00:06:24.909 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.909 07:58:55 -- accel/accel.sh@21 -- # val= 00:06:24.909 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.909 07:58:55 -- accel/accel.sh@21 -- # val= 00:06:24.909 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:06:24.909 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:06:24.909 07:58:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.909 07:58:55 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:24.909 07:58:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.909 00:06:24.909 real 0m2.557s 00:06:24.909 user 0m2.359s 00:06:24.909 sys 0m0.203s 00:06:24.909 07:58:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.909 07:58:55 -- common/autotest_common.sh@10 -- # set +x 00:06:24.909 ************************************ 00:06:24.909 END TEST accel_dualcast 00:06:24.909 ************************************ 00:06:24.909 07:58:55 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:24.909 07:58:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:24.909 07:58:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.909 07:58:55 -- common/autotest_common.sh@10 -- # set +x 00:06:24.909 ************************************ 00:06:24.909 START TEST accel_compare 00:06:24.909 ************************************ 00:06:24.909 07:58:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:24.909 07:58:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.909 07:58:55 -- 
accel/accel.sh@17 -- # local accel_module 00:06:24.909 07:58:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:24.909 07:58:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:24.909 07:58:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.909 07:58:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.909 07:58:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.909 07:58:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.909 07:58:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.909 07:58:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.910 07:58:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.910 07:58:55 -- accel/accel.sh@42 -- # jq -r . 00:06:24.910 [2024-06-11 07:58:55.526375] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:24.910 [2024-06-11 07:58:55.526482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849678 ] 00:06:24.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.169 [2024-06-11 07:58:55.589575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.169 [2024-06-11 07:58:55.654504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.553 07:58:56 -- accel/accel.sh@18 -- # out=' 00:06:26.553 SPDK Configuration: 00:06:26.553 Core mask: 0x1 00:06:26.553 00:06:26.553 Accel Perf Configuration: 00:06:26.553 Workload Type: compare 00:06:26.553 Transfer size: 4096 bytes 00:06:26.553 Vector count 1 00:06:26.553 Module: software 00:06:26.553 Queue depth: 32 00:06:26.553 Allocate depth: 32 00:06:26.553 # threads/core: 1 00:06:26.553 Run time: 1 seconds 00:06:26.553 Verify: Yes 00:06:26.553 00:06:26.553 Running for 1 seconds... 00:06:26.553 00:06:26.553 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.553 ------------------------------------------------------------------------------------ 00:06:26.553 0,0 436928/s 1706 MiB/s 0 0 00:06:26.553 ==================================================================================== 00:06:26.553 Total 436928/s 1706 MiB/s 0 0' 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:26.553 07:58:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:26.553 07:58:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.553 07:58:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.553 07:58:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.553 07:58:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.553 07:58:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.553 07:58:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.553 07:58:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.553 07:58:56 -- accel/accel.sh@42 -- # jq -r . 00:06:26.553 [2024-06-11 07:58:56.807072] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:26.553 [2024-06-11 07:58:56.807172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849842 ] 00:06:26.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.553 [2024-06-11 07:58:56.869047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.553 [2024-06-11 07:58:56.930743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val= 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val= 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val=0x1 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val= 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val= 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val=compare 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val= 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val=software 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val=32 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val=32 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val=1 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val=Yes 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val= 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:26.553 07:58:56 -- accel/accel.sh@21 -- # val= 00:06:26.553 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:26.553 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:27.495 07:58:58 -- accel/accel.sh@21 -- # val= 00:06:27.495 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.495 07:58:58 -- accel/accel.sh@21 -- # val= 00:06:27.495 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.495 07:58:58 -- accel/accel.sh@21 -- # val= 00:06:27.495 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.495 07:58:58 -- accel/accel.sh@21 -- # val= 00:06:27.495 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.495 07:58:58 -- accel/accel.sh@21 -- # val= 00:06:27.495 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.495 07:58:58 -- accel/accel.sh@21 -- # val= 00:06:27.495 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:06:27.495 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:06:27.495 07:58:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.495 07:58:58 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:27.495 07:58:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.495 00:06:27.495 real 0m2.562s 00:06:27.495 user 0m2.362s 00:06:27.495 sys 0m0.207s 00:06:27.495 07:58:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.495 07:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:27.495 ************************************ 00:06:27.495 END TEST accel_compare 00:06:27.495 ************************************ 00:06:27.495 07:58:58 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:27.495 07:58:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:27.495 07:58:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.495 07:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:27.495 ************************************ 00:06:27.495 START TEST accel_xor 00:06:27.495 ************************************ 00:06:27.495 07:58:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:27.495 07:58:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.495 07:58:58 -- accel/accel.sh@17 
-- # local accel_module 00:06:27.495 07:58:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:27.495 07:58:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:27.495 07:58:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.495 07:58:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.495 07:58:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.495 07:58:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.495 07:58:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.495 07:58:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.495 07:58:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.495 07:58:58 -- accel/accel.sh@42 -- # jq -r . 00:06:27.495 [2024-06-11 07:58:58.129034] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:27.495 [2024-06-11 07:58:58.129105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850184 ] 00:06:27.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.756 [2024-06-11 07:58:58.191168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.756 [2024-06-11 07:58:58.254974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.140 07:58:59 -- accel/accel.sh@18 -- # out=' 00:06:29.140 SPDK Configuration: 00:06:29.140 Core mask: 0x1 00:06:29.140 00:06:29.140 Accel Perf Configuration: 00:06:29.140 Workload Type: xor 00:06:29.140 Source buffers: 2 00:06:29.140 Transfer size: 4096 bytes 00:06:29.140 Vector count 1 00:06:29.140 Module: software 00:06:29.140 Queue depth: 32 00:06:29.140 Allocate depth: 32 00:06:29.140 # threads/core: 1 00:06:29.140 Run time: 1 seconds 00:06:29.140 Verify: Yes 00:06:29.140 00:06:29.140 Running for 1 seconds... 00:06:29.140 00:06:29.140 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.140 ------------------------------------------------------------------------------------ 00:06:29.140 0,0 360672/s 1408 MiB/s 0 0 00:06:29.140 ==================================================================================== 00:06:29.140 Total 360672/s 1408 MiB/s 0 0' 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:29.140 07:58:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:29.140 07:58:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.140 07:58:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.140 07:58:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.140 07:58:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.140 07:58:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.140 07:58:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.140 07:58:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.140 07:58:59 -- accel/accel.sh@42 -- # jq -r . 00:06:29.140 [2024-06-11 07:58:59.404815] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:29.140 [2024-06-11 07:58:59.404886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850518 ] 00:06:29.140 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.140 [2024-06-11 07:58:59.465761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.140 [2024-06-11 07:58:59.526384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val= 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val= 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val=0x1 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val= 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val= 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val=xor 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val=2 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val= 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val=software 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val=32 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val=32 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- 
accel/accel.sh@21 -- # val=1 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.140 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.140 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.140 07:58:59 -- accel/accel.sh@21 -- # val=Yes 00:06:29.141 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.141 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.141 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.141 07:58:59 -- accel/accel.sh@21 -- # val= 00:06:29.141 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.141 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.141 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:29.141 07:58:59 -- accel/accel.sh@21 -- # val= 00:06:29.141 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.141 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:29.141 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:30.083 07:59:00 -- accel/accel.sh@21 -- # val= 00:06:30.083 07:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.083 07:59:00 -- accel/accel.sh@21 -- # val= 00:06:30.083 07:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.083 07:59:00 -- accel/accel.sh@21 -- # val= 00:06:30.083 07:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.083 07:59:00 -- accel/accel.sh@21 -- # val= 00:06:30.083 07:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.083 07:59:00 -- accel/accel.sh@21 -- # val= 00:06:30.083 07:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.083 07:59:00 -- accel/accel.sh@21 -- # val= 00:06:30.083 07:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:30.083 07:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:30.083 07:59:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.083 07:59:00 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:30.083 07:59:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.083 00:06:30.083 real 0m2.554s 00:06:30.083 user 0m2.366s 00:06:30.083 sys 0m0.193s 00:06:30.083 07:59:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.083 07:59:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.083 ************************************ 00:06:30.083 END TEST accel_xor 00:06:30.083 ************************************ 00:06:30.083 07:59:00 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:30.083 07:59:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:30.083 07:59:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.083 07:59:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.083 ************************************ 00:06:30.083 START TEST accel_xor 
00:06:30.083 ************************************ 00:06:30.083 07:59:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:30.083 07:59:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.083 07:59:00 -- accel/accel.sh@17 -- # local accel_module 00:06:30.083 07:59:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:30.083 07:59:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:30.083 07:59:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.083 07:59:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.083 07:59:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.083 07:59:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.083 07:59:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.083 07:59:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.083 07:59:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.083 07:59:00 -- accel/accel.sh@42 -- # jq -r . 00:06:30.083 [2024-06-11 07:59:00.728360] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:30.083 [2024-06-11 07:59:00.728471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850824 ] 00:06:30.344 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.344 [2024-06-11 07:59:00.791981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.344 [2024-06-11 07:59:00.856755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.725 07:59:01 -- accel/accel.sh@18 -- # out=' 00:06:31.725 SPDK Configuration: 00:06:31.725 Core mask: 0x1 00:06:31.725 00:06:31.725 Accel Perf Configuration: 00:06:31.725 Workload Type: xor 00:06:31.725 Source buffers: 3 00:06:31.725 Transfer size: 4096 bytes 00:06:31.725 Vector count 1 00:06:31.725 Module: software 00:06:31.725 Queue depth: 32 00:06:31.725 Allocate depth: 32 00:06:31.725 # threads/core: 1 00:06:31.725 Run time: 1 seconds 00:06:31.725 Verify: Yes 00:06:31.725 00:06:31.725 Running for 1 seconds... 00:06:31.725 00:06:31.725 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.725 ------------------------------------------------------------------------------------ 00:06:31.725 0,0 342944/s 1339 MiB/s 0 0 00:06:31.725 ==================================================================================== 00:06:31.725 Total 342944/s 1339 MiB/s 0 0' 00:06:31.725 07:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:31.725 07:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:31.725 07:59:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:31.725 07:59:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:31.725 07:59:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.726 07:59:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.726 07:59:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.726 07:59:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.726 07:59:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.726 07:59:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.726 07:59:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.726 07:59:01 -- accel/accel.sh@42 -- # jq -r . 00:06:31.726 [2024-06-11 07:59:02.007545] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:31.726 [2024-06-11 07:59:02.007616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850958 ] 00:06:31.726 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.726 [2024-06-11 07:59:02.068659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.726 [2024-06-11 07:59:02.130284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val= 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val= 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val=0x1 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val= 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val= 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val=xor 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val=3 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val= 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val=software 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val=32 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val=32 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- 
accel/accel.sh@21 -- # val=1 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val=Yes 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val= 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:31.726 07:59:02 -- accel/accel.sh@21 -- # val= 00:06:31.726 07:59:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # IFS=: 00:06:31.726 07:59:02 -- accel/accel.sh@20 -- # read -r var val 00:06:32.668 07:59:03 -- accel/accel.sh@21 -- # val= 00:06:32.668 07:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.668 07:59:03 -- accel/accel.sh@21 -- # val= 00:06:32.668 07:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.668 07:59:03 -- accel/accel.sh@21 -- # val= 00:06:32.668 07:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.668 07:59:03 -- accel/accel.sh@21 -- # val= 00:06:32.668 07:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.668 07:59:03 -- accel/accel.sh@21 -- # val= 00:06:32.668 07:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.668 07:59:03 -- accel/accel.sh@21 -- # val= 00:06:32.668 07:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:32.668 07:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:32.668 07:59:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.668 07:59:03 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:32.668 07:59:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.668 00:06:32.668 real 0m2.559s 00:06:32.668 user 0m2.369s 00:06:32.668 sys 0m0.197s 00:06:32.668 07:59:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.668 07:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:32.668 ************************************ 00:06:32.668 END TEST accel_xor 00:06:32.668 ************************************ 00:06:32.668 07:59:03 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:32.668 07:59:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:32.668 07:59:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.668 07:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:32.668 ************************************ 00:06:32.668 START TEST 
accel_dif_verify 00:06:32.668 ************************************ 00:06:32.668 07:59:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:32.668 07:59:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.668 07:59:03 -- accel/accel.sh@17 -- # local accel_module 00:06:32.668 07:59:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:32.668 07:59:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:32.668 07:59:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.668 07:59:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.668 07:59:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.668 07:59:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.668 07:59:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.668 07:59:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.668 07:59:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.668 07:59:03 -- accel/accel.sh@42 -- # jq -r . 00:06:32.929 [2024-06-11 07:59:03.328754] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:32.929 [2024-06-11 07:59:03.328824] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851244 ] 00:06:32.929 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.929 [2024-06-11 07:59:03.389432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.929 [2024-06-11 07:59:03.450324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.312 07:59:04 -- accel/accel.sh@18 -- # out=' 00:06:34.312 SPDK Configuration: 00:06:34.312 Core mask: 0x1 00:06:34.312 00:06:34.312 Accel Perf Configuration: 00:06:34.312 Workload Type: dif_verify 00:06:34.312 Vector size: 4096 bytes 00:06:34.312 Transfer size: 4096 bytes 00:06:34.312 Block size: 512 bytes 00:06:34.312 Metadata size: 8 bytes 00:06:34.312 Vector count 1 00:06:34.312 Module: software 00:06:34.312 Queue depth: 32 00:06:34.312 Allocate depth: 32 00:06:34.312 # threads/core: 1 00:06:34.312 Run time: 1 seconds 00:06:34.312 Verify: No 00:06:34.312 00:06:34.312 Running for 1 seconds... 00:06:34.312 00:06:34.312 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.312 ------------------------------------------------------------------------------------ 00:06:34.312 0,0 94848/s 376 MiB/s 0 0 00:06:34.312 ==================================================================================== 00:06:34.312 Total 94848/s 370 MiB/s 0 0' 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:34.312 07:59:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:34.312 07:59:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.312 07:59:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.312 07:59:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.312 07:59:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.312 07:59:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.312 07:59:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.312 07:59:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.312 07:59:04 -- accel/accel.sh@42 -- # jq -r . 
00:06:34.312 [2024-06-11 07:59:04.603611] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:34.312 [2024-06-11 07:59:04.603718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851578 ] 00:06:34.312 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.312 [2024-06-11 07:59:04.664999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.312 [2024-06-11 07:59:04.726540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val= 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val= 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val=0x1 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val= 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val= 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val=dif_verify 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val= 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.312 07:59:04 -- accel/accel.sh@21 -- # val=software 00:06:34.312 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.312 07:59:04 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.312 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.313 07:59:04 -- accel/accel.sh@21 -- # val=32 00:06:34.313 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.313 07:59:04 -- accel/accel.sh@21 -- # val=32 00:06:34.313 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.313 07:59:04 -- accel/accel.sh@21 -- # val=1 00:06:34.313 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.313 07:59:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.313 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.313 07:59:04 -- accel/accel.sh@21 -- # val=No 00:06:34.313 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.313 07:59:04 -- accel/accel.sh@21 -- # val= 00:06:34.313 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:34.313 07:59:04 -- accel/accel.sh@21 -- # val= 00:06:34.313 07:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:34.313 07:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:35.254 07:59:05 -- accel/accel.sh@21 -- # val= 00:06:35.254 07:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:35.254 07:59:05 -- accel/accel.sh@21 -- # val= 00:06:35.254 07:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:35.254 07:59:05 -- accel/accel.sh@21 -- # val= 00:06:35.254 07:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:35.254 07:59:05 -- accel/accel.sh@21 -- # val= 00:06:35.254 07:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:35.254 07:59:05 -- accel/accel.sh@21 -- # val= 00:06:35.254 07:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:35.254 07:59:05 -- accel/accel.sh@21 -- # val= 00:06:35.254 07:59:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # IFS=: 00:06:35.254 07:59:05 -- accel/accel.sh@20 -- # read -r var val 00:06:35.254 07:59:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.254 07:59:05 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:35.254 07:59:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.254 00:06:35.254 real 0m2.554s 00:06:35.254 user 0m2.364s 00:06:35.254 sys 0m0.196s 00:06:35.254 07:59:05 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.254 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.254 ************************************ 00:06:35.254 END TEST accel_dif_verify 00:06:35.254 ************************************ 00:06:35.254 07:59:05 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:35.254 07:59:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:35.254 07:59:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.254 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.254 ************************************ 00:06:35.254 START TEST accel_dif_generate 00:06:35.254 ************************************ 00:06:35.254 07:59:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:35.254 07:59:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.254 07:59:05 -- accel/accel.sh@17 -- # local accel_module 00:06:35.514 07:59:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:35.514 07:59:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:35.514 07:59:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.514 07:59:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.514 07:59:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.514 07:59:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.514 07:59:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.514 07:59:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.514 07:59:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.514 07:59:05 -- accel/accel.sh@42 -- # jq -r . 00:06:35.515 [2024-06-11 07:59:05.925278] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:35.515 [2024-06-11 07:59:05.925351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851935 ] 00:06:35.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.515 [2024-06-11 07:59:05.985891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.515 [2024-06-11 07:59:06.047390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.898 07:59:07 -- accel/accel.sh@18 -- # out=' 00:06:36.898 SPDK Configuration: 00:06:36.898 Core mask: 0x1 00:06:36.898 00:06:36.898 Accel Perf Configuration: 00:06:36.898 Workload Type: dif_generate 00:06:36.898 Vector size: 4096 bytes 00:06:36.898 Transfer size: 4096 bytes 00:06:36.898 Block size: 512 bytes 00:06:36.898 Metadata size: 8 bytes 00:06:36.898 Vector count 1 00:06:36.898 Module: software 00:06:36.898 Queue depth: 32 00:06:36.898 Allocate depth: 32 00:06:36.898 # threads/core: 1 00:06:36.898 Run time: 1 seconds 00:06:36.898 Verify: No 00:06:36.898 00:06:36.898 Running for 1 seconds... 
00:06:36.898 00:06:36.898 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.898 ------------------------------------------------------------------------------------ 00:06:36.898 0,0 113536/s 450 MiB/s 0 0 00:06:36.898 ==================================================================================== 00:06:36.898 Total 113536/s 443 MiB/s 0 0' 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:36.898 07:59:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:36.898 07:59:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.898 07:59:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.898 07:59:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.898 07:59:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.898 07:59:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.898 07:59:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.898 07:59:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.898 07:59:07 -- accel/accel.sh@42 -- # jq -r . 00:06:36.898 [2024-06-11 07:59:07.198547] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:36.898 [2024-06-11 07:59:07.198634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852114 ] 00:06:36.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.898 [2024-06-11 07:59:07.260845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.898 [2024-06-11 07:59:07.323413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val= 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val= 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val=0x1 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val= 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val= 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val=dif_generate 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 
00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val= 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val=software 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val=32 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val=32 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val=1 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val=No 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val= 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:36.898 07:59:07 -- accel/accel.sh@21 -- # val= 00:06:36.898 07:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # IFS=: 00:06:36.898 07:59:07 -- accel/accel.sh@20 -- # read -r var val 00:06:37.840 07:59:08 -- accel/accel.sh@21 -- # val= 00:06:37.840 07:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.840 07:59:08 -- accel/accel.sh@21 -- # val= 00:06:37.840 07:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.840 07:59:08 -- accel/accel.sh@21 -- # val= 00:06:37.840 07:59:08 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.840 07:59:08 -- accel/accel.sh@21 -- # val= 00:06:37.840 07:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.840 07:59:08 -- accel/accel.sh@21 -- # val= 00:06:37.840 07:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.840 07:59:08 -- accel/accel.sh@21 -- # val= 00:06:37.840 07:59:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # IFS=: 00:06:37.840 07:59:08 -- accel/accel.sh@20 -- # read -r var val 00:06:37.840 07:59:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.840 07:59:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:37.840 07:59:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.840 00:06:37.840 real 0m2.554s 00:06:37.840 user 0m2.363s 00:06:37.840 sys 0m0.198s 00:06:37.840 07:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.840 07:59:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.840 ************************************ 00:06:37.840 END TEST accel_dif_generate 00:06:37.840 ************************************ 00:06:38.101 07:59:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:38.101 07:59:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:38.101 07:59:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.101 07:59:08 -- common/autotest_common.sh@10 -- # set +x 00:06:38.101 ************************************ 00:06:38.101 START TEST accel_dif_generate_copy 00:06:38.101 ************************************ 00:06:38.101 07:59:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:38.101 07:59:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.101 07:59:08 -- accel/accel.sh@17 -- # local accel_module 00:06:38.101 07:59:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:38.101 07:59:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:38.101 07:59:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.101 07:59:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.101 07:59:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.101 07:59:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.101 07:59:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.101 07:59:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.101 07:59:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.101 07:59:08 -- accel/accel.sh@42 -- # jq -r . 00:06:38.101 [2024-06-11 07:59:08.522934] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:38.101 [2024-06-11 07:59:08.523046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852308 ] 00:06:38.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.101 [2024-06-11 07:59:08.586168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.101 [2024-06-11 07:59:08.651089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.483 07:59:09 -- accel/accel.sh@18 -- # out=' 00:06:39.483 SPDK Configuration: 00:06:39.483 Core mask: 0x1 00:06:39.483 00:06:39.483 Accel Perf Configuration: 00:06:39.483 Workload Type: dif_generate_copy 00:06:39.483 Vector size: 4096 bytes 00:06:39.483 Transfer size: 4096 bytes 00:06:39.483 Vector count 1 00:06:39.483 Module: software 00:06:39.483 Queue depth: 32 00:06:39.483 Allocate depth: 32 00:06:39.483 # threads/core: 1 00:06:39.483 Run time: 1 seconds 00:06:39.483 Verify: No 00:06:39.483 00:06:39.483 Running for 1 seconds... 00:06:39.483 00:06:39.483 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.483 ------------------------------------------------------------------------------------ 00:06:39.483 0,0 86592/s 343 MiB/s 0 0 00:06:39.483 ==================================================================================== 00:06:39.483 Total 86592/s 338 MiB/s 0 0' 00:06:39.483 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.483 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.483 07:59:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:39.483 07:59:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:39.483 07:59:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.483 07:59:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.483 07:59:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.483 07:59:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.483 07:59:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.483 07:59:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.483 07:59:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.483 07:59:09 -- accel/accel.sh@42 -- # jq -r . 00:06:39.483 [2024-06-11 07:59:09.802833] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:39.483 [2024-06-11 07:59:09.802902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852646 ] 00:06:39.483 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.483 [2024-06-11 07:59:09.863304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.483 [2024-06-11 07:59:09.924033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.483 07:59:09 -- accel/accel.sh@21 -- # val= 00:06:39.483 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.483 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.483 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.483 07:59:09 -- accel/accel.sh@21 -- # val= 00:06:39.483 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.483 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val=0x1 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val= 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val= 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val= 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val=software 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val=32 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val=32 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var 
val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val=1 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val=No 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val= 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:39.484 07:59:09 -- accel/accel.sh@21 -- # val= 00:06:39.484 07:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # IFS=: 00:06:39.484 07:59:09 -- accel/accel.sh@20 -- # read -r var val 00:06:40.425 07:59:11 -- accel/accel.sh@21 -- # val= 00:06:40.425 07:59:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.425 07:59:11 -- accel/accel.sh@21 -- # val= 00:06:40.425 07:59:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.425 07:59:11 -- accel/accel.sh@21 -- # val= 00:06:40.425 07:59:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.425 07:59:11 -- accel/accel.sh@21 -- # val= 00:06:40.425 07:59:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.425 07:59:11 -- accel/accel.sh@21 -- # val= 00:06:40.425 07:59:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.425 07:59:11 -- accel/accel.sh@21 -- # val= 00:06:40.425 07:59:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # IFS=: 00:06:40.425 07:59:11 -- accel/accel.sh@20 -- # read -r var val 00:06:40.425 07:59:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.425 07:59:11 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:40.425 07:59:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.425 00:06:40.425 real 0m2.559s 00:06:40.425 user 0m2.366s 00:06:40.425 sys 0m0.199s 00:06:40.425 07:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.425 07:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:40.425 ************************************ 00:06:40.425 END TEST accel_dif_generate_copy 00:06:40.425 ************************************ 00:06:40.686 07:59:11 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:40.686 07:59:11 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.686 07:59:11 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:40.686 07:59:11 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.686 07:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:40.686 ************************************ 00:06:40.686 START TEST accel_comp 00:06:40.686 ************************************ 00:06:40.686 07:59:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.686 07:59:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.686 07:59:11 -- accel/accel.sh@17 -- # local accel_module 00:06:40.686 07:59:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.686 07:59:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.686 07:59:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.686 07:59:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.686 07:59:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.686 07:59:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.686 07:59:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.686 07:59:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.686 07:59:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.686 07:59:11 -- accel/accel.sh@42 -- # jq -r . 00:06:40.686 [2024-06-11 07:59:11.124992] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:40.686 [2024-06-11 07:59:11.125084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852997 ] 00:06:40.686 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.686 [2024-06-11 07:59:11.188011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.686 [2024-06-11 07:59:11.248383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.066 07:59:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:42.066 00:06:42.066 SPDK Configuration: 00:06:42.066 Core mask: 0x1 00:06:42.066 00:06:42.066 Accel Perf Configuration: 00:06:42.066 Workload Type: compress 00:06:42.066 Transfer size: 4096 bytes 00:06:42.066 Vector count 1 00:06:42.066 Module: software 00:06:42.066 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.066 Queue depth: 32 00:06:42.066 Allocate depth: 32 00:06:42.066 # threads/core: 1 00:06:42.066 Run time: 1 seconds 00:06:42.066 Verify: No 00:06:42.066 00:06:42.066 Running for 1 seconds... 
00:06:42.066 00:06:42.066 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.066 ------------------------------------------------------------------------------------ 00:06:42.066 0,0 47648/s 198 MiB/s 0 0 00:06:42.066 ==================================================================================== 00:06:42.066 Total 47648/s 186 MiB/s 0 0' 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.066 07:59:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.066 07:59:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.066 07:59:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.066 07:59:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.066 07:59:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.066 07:59:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.066 07:59:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.066 07:59:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.066 07:59:12 -- accel/accel.sh@42 -- # jq -r . 00:06:42.066 [2024-06-11 07:59:12.402608] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:42.066 [2024-06-11 07:59:12.402700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853266 ] 00:06:42.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.066 [2024-06-11 07:59:12.472542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.066 [2024-06-11 07:59:12.534951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=0x1 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=compress 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 
07:59:12 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=software 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=32 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=32 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=1 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val=No 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:42.066 07:59:12 -- accel/accel.sh@21 -- # val= 00:06:42.066 07:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # IFS=: 00:06:42.066 07:59:12 -- accel/accel.sh@20 -- # read -r var val 00:06:43.451 07:59:13 -- accel/accel.sh@21 -- # val= 00:06:43.451 07:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.451 07:59:13 -- accel/accel.sh@21 -- # val= 00:06:43.451 07:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.451 07:59:13 -- accel/accel.sh@21 -- # val= 00:06:43.451 07:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # 
IFS=: 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.451 07:59:13 -- accel/accel.sh@21 -- # val= 00:06:43.451 07:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.451 07:59:13 -- accel/accel.sh@21 -- # val= 00:06:43.451 07:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.451 07:59:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.451 07:59:13 -- accel/accel.sh@21 -- # val= 00:06:43.452 07:59:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.452 07:59:13 -- accel/accel.sh@20 -- # IFS=: 00:06:43.452 07:59:13 -- accel/accel.sh@20 -- # read -r var val 00:06:43.452 07:59:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.452 07:59:13 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:43.452 07:59:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.452 00:06:43.452 real 0m2.570s 00:06:43.452 user 0m2.357s 00:06:43.452 sys 0m0.221s 00:06:43.452 07:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.452 07:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.452 ************************************ 00:06:43.452 END TEST accel_comp 00:06:43.452 ************************************ 00:06:43.452 07:59:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.452 07:59:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:43.452 07:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.452 07:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.452 ************************************ 00:06:43.452 START TEST accel_decomp 00:06:43.452 ************************************ 00:06:43.452 07:59:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.452 07:59:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.452 07:59:13 -- accel/accel.sh@17 -- # local accel_module 00:06:43.452 07:59:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.452 07:59:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:43.452 07:59:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.452 07:59:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.452 07:59:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.452 07:59:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.452 07:59:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.452 07:59:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.452 07:59:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.452 07:59:13 -- accel/accel.sh@42 -- # jq -r . 00:06:43.452 [2024-06-11 07:59:13.737001] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:43.452 [2024-06-11 07:59:13.737085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853434 ] 00:06:43.452 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.452 [2024-06-11 07:59:13.799114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.452 [2024-06-11 07:59:13.863724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.392 07:59:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:44.392 00:06:44.392 SPDK Configuration: 00:06:44.392 Core mask: 0x1 00:06:44.392 00:06:44.392 Accel Perf Configuration: 00:06:44.392 Workload Type: decompress 00:06:44.392 Transfer size: 4096 bytes 00:06:44.392 Vector count 1 00:06:44.392 Module: software 00:06:44.392 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.392 Queue depth: 32 00:06:44.392 Allocate depth: 32 00:06:44.392 # threads/core: 1 00:06:44.392 Run time: 1 seconds 00:06:44.392 Verify: Yes 00:06:44.392 00:06:44.392 Running for 1 seconds... 00:06:44.392 00:06:44.392 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.392 ------------------------------------------------------------------------------------ 00:06:44.392 0,0 63168/s 116 MiB/s 0 0 00:06:44.392 ==================================================================================== 00:06:44.392 Total 63168/s 246 MiB/s 0 0' 00:06:44.392 07:59:14 -- accel/accel.sh@20 -- # IFS=: 00:06:44.392 07:59:14 -- accel/accel.sh@20 -- # read -r var val 00:06:44.392 07:59:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.392 07:59:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.392 07:59:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.392 07:59:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.392 07:59:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.393 07:59:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.393 07:59:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.393 07:59:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.393 07:59:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.393 07:59:14 -- accel/accel.sh@42 -- # jq -r . 00:06:44.393 [2024-06-11 07:59:15.018350] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
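Note: the accel_decomp cases above drive SPDK's accel_perf example against the software accel module. As a minimal sketch, assuming an SPDK checkout built under ./spdk and reusing only flags that appear in this log, the same decompress case could be run by hand roughly as:

    # -t 1: run for 1 second, -w decompress: workload type,
    # -l: compressed input file, -y: verify the decompressed output
    ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y

The test wrapper additionally passes an accel JSON config over -c /dev/fd/62 (produced by build_accel_config); running without that config is an assumption of this sketch, not something the log demonstrates.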
00:06:44.393 [2024-06-11 07:59:15.018469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853705 ] 00:06:44.653 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.653 [2024-06-11 07:59:15.079563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.653 [2024-06-11 07:59:15.141453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.653 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.653 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.653 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.653 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.653 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.653 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.653 07:59:15 -- accel/accel.sh@21 -- # val=0x1 00:06:44.653 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.653 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.653 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val=decompress 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val=software 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val=32 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 
-- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val=32 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val=1 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val=Yes 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:44.654 07:59:15 -- accel/accel.sh@21 -- # val= 00:06:44.654 07:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:44.654 07:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:46.038 07:59:16 -- accel/accel.sh@21 -- # val= 00:06:46.038 07:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.038 07:59:16 -- accel/accel.sh@21 -- # val= 00:06:46.038 07:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.038 07:59:16 -- accel/accel.sh@21 -- # val= 00:06:46.038 07:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.038 07:59:16 -- accel/accel.sh@21 -- # val= 00:06:46.038 07:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.038 07:59:16 -- accel/accel.sh@21 -- # val= 00:06:46.038 07:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.038 07:59:16 -- accel/accel.sh@21 -- # val= 00:06:46.038 07:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # IFS=: 00:06:46.038 07:59:16 -- accel/accel.sh@20 -- # read -r var val 00:06:46.038 07:59:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.038 07:59:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:46.038 07:59:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.038 00:06:46.038 real 0m2.563s 00:06:46.038 user 0m2.377s 00:06:46.038 sys 0m0.192s 00:06:46.038 07:59:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.038 07:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.038 ************************************ 00:06:46.038 END TEST accel_decomp 00:06:46.038 ************************************ 00:06:46.038 07:59:16 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.038 07:59:16 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:46.038 07:59:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.038 07:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.038 ************************************ 00:06:46.038 START TEST accel_decmop_full 00:06:46.038 ************************************ 00:06:46.038 07:59:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.038 07:59:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.038 07:59:16 -- accel/accel.sh@17 -- # local accel_module 00:06:46.038 07:59:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.038 07:59:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.038 07:59:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.038 07:59:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.038 07:59:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.038 07:59:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.038 07:59:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.038 07:59:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.038 07:59:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.038 07:59:16 -- accel/accel.sh@42 -- # jq -r . 00:06:46.038 [2024-06-11 07:59:16.338548] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:46.038 [2024-06-11 07:59:16.338617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854057 ] 00:06:46.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.038 [2024-06-11 07:59:16.399729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.038 [2024-06-11 07:59:16.461646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.981 07:59:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:46.981 00:06:46.981 SPDK Configuration: 00:06:46.981 Core mask: 0x1 00:06:46.981 00:06:46.981 Accel Perf Configuration: 00:06:46.981 Workload Type: decompress 00:06:46.981 Transfer size: 111250 bytes 00:06:46.981 Vector count 1 00:06:46.981 Module: software 00:06:46.981 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.981 Queue depth: 32 00:06:46.981 Allocate depth: 32 00:06:46.981 # threads/core: 1 00:06:46.981 Run time: 1 seconds 00:06:46.981 Verify: Yes 00:06:46.981 00:06:46.981 Running for 1 seconds... 
00:06:46.981 00:06:46.981 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.981 ------------------------------------------------------------------------------------ 00:06:46.981 0,0 4064/s 167 MiB/s 0 0 00:06:46.981 ==================================================================================== 00:06:46.981 Total 4064/s 431 MiB/s 0 0' 00:06:46.981 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:46.981 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:46.981 07:59:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.981 07:59:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:46.981 07:59:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.981 07:59:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.981 07:59:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.981 07:59:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.981 07:59:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.981 07:59:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.981 07:59:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.981 07:59:17 -- accel/accel.sh@42 -- # jq -r . 00:06:46.981 [2024-06-11 07:59:17.624967] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:46.981 [2024-06-11 07:59:17.625105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854393 ] 00:06:47.242 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.242 [2024-06-11 07:59:17.696307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.242 [2024-06-11 07:59:17.757170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=0x1 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=decompress 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 
00:06:47.242 07:59:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=software 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=32 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=32 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=1 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val=Yes 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:47.242 07:59:17 -- accel/accel.sh@21 -- # val= 00:06:47.242 07:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # IFS=: 00:06:47.242 07:59:17 -- accel/accel.sh@20 -- # read -r var val 00:06:48.625 07:59:18 -- accel/accel.sh@21 -- # val= 00:06:48.625 07:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.625 07:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:48.625 07:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:48.625 07:59:18 -- accel/accel.sh@21 -- # val= 00:06:48.625 07:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.625 07:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:48.625 07:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:48.625 07:59:18 -- accel/accel.sh@21 -- # val= 00:06:48.625 07:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.626 07:59:18 -- 
accel/accel.sh@20 -- # IFS=: 00:06:48.626 07:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:48.626 07:59:18 -- accel/accel.sh@21 -- # val= 00:06:48.626 07:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.626 07:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:48.626 07:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:48.626 07:59:18 -- accel/accel.sh@21 -- # val= 00:06:48.626 07:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.626 07:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:48.626 07:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:48.626 07:59:18 -- accel/accel.sh@21 -- # val= 00:06:48.626 07:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.626 07:59:18 -- accel/accel.sh@20 -- # IFS=: 00:06:48.626 07:59:18 -- accel/accel.sh@20 -- # read -r var val 00:06:48.626 07:59:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.626 07:59:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:48.626 07:59:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.626 00:06:48.626 real 0m2.586s 00:06:48.626 user 0m2.387s 00:06:48.626 sys 0m0.204s 00:06:48.626 07:59:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.626 07:59:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.626 ************************************ 00:06:48.626 END TEST accel_decmop_full 00:06:48.626 ************************************ 00:06:48.626 07:59:18 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.626 07:59:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:48.626 07:59:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.626 07:59:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.626 ************************************ 00:06:48.626 START TEST accel_decomp_mcore 00:06:48.626 ************************************ 00:06:48.626 07:59:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.626 07:59:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.626 07:59:18 -- accel/accel.sh@17 -- # local accel_module 00:06:48.626 07:59:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.626 07:59:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:48.626 07:59:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.626 07:59:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.626 07:59:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.626 07:59:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.626 07:59:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.626 07:59:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.626 07:59:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.626 07:59:18 -- accel/accel.sh@42 -- # jq -r . 00:06:48.626 [2024-06-11 07:59:18.967254] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:48.626 [2024-06-11 07:59:18.967327] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854599 ] 00:06:48.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.626 [2024-06-11 07:59:19.030502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:48.626 [2024-06-11 07:59:19.097414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.626 [2024-06-11 07:59:19.097545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.626 [2024-06-11 07:59:19.097601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.626 [2024-06-11 07:59:19.097601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.009 07:59:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:50.009 00:06:50.009 SPDK Configuration: 00:06:50.009 Core mask: 0xf 00:06:50.009 00:06:50.009 Accel Perf Configuration: 00:06:50.009 Workload Type: decompress 00:06:50.009 Transfer size: 4096 bytes 00:06:50.009 Vector count 1 00:06:50.009 Module: software 00:06:50.009 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.009 Queue depth: 32 00:06:50.009 Allocate depth: 32 00:06:50.009 # threads/core: 1 00:06:50.009 Run time: 1 seconds 00:06:50.009 Verify: Yes 00:06:50.009 00:06:50.009 Running for 1 seconds... 00:06:50.009 00:06:50.009 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.009 ------------------------------------------------------------------------------------ 00:06:50.009 0,0 58464/s 107 MiB/s 0 0 00:06:50.009 3,0 58624/s 108 MiB/s 0 0 00:06:50.009 2,0 86464/s 159 MiB/s 0 0 00:06:50.009 1,0 58656/s 108 MiB/s 0 0 00:06:50.009 ==================================================================================== 00:06:50.009 Total 262208/s 1024 MiB/s 0 0' 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:50.009 07:59:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:50.009 07:59:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.009 07:59:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.009 07:59:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.009 07:59:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.009 07:59:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.009 07:59:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.009 07:59:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.009 07:59:20 -- accel/accel.sh@42 -- # jq -r . 00:06:50.009 [2024-06-11 07:59:20.257544] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:50.009 [2024-06-11 07:59:20.257616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854770 ] 00:06:50.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.009 [2024-06-11 07:59:20.319973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.009 [2024-06-11 07:59:20.384477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.009 [2024-06-11 07:59:20.384707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.009 [2024-06-11 07:59:20.384707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.009 [2024-06-11 07:59:20.384545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val=0xf 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val=decompress 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.009 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.009 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.009 07:59:20 -- accel/accel.sh@21 -- # val=software 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val=32 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val=32 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val=1 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val=Yes 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:50.010 07:59:20 -- accel/accel.sh@21 -- # val= 00:06:50.010 07:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # IFS=: 00:06:50.010 07:59:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 
07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@21 -- # val= 00:06:51.013 07:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # IFS=: 00:06:51.013 07:59:21 -- accel/accel.sh@20 -- # read -r var val 00:06:51.013 07:59:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.014 07:59:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:51.014 07:59:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.014 00:06:51.014 real 0m2.584s 00:06:51.014 user 0m8.847s 00:06:51.014 sys 0m0.215s 00:06:51.014 07:59:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.014 07:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.014 ************************************ 00:06:51.014 END TEST accel_decomp_mcore 00:06:51.014 ************************************ 00:06:51.014 07:59:21 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.014 07:59:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:51.014 07:59:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.014 07:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.014 ************************************ 00:06:51.014 START TEST accel_decomp_full_mcore 00:06:51.014 ************************************ 00:06:51.014 07:59:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.014 07:59:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.014 07:59:21 -- accel/accel.sh@17 -- # local accel_module 00:06:51.014 07:59:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.014 07:59:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:51.014 07:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.014 07:59:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.014 07:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.014 07:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.014 07:59:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.014 07:59:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.014 07:59:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.014 07:59:21 -- accel/accel.sh@42 -- # jq -r . 00:06:51.014 [2024-06-11 07:59:21.595467] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:51.014 [2024-06-11 07:59:21.595536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855123 ] 00:06:51.014 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.316 [2024-06-11 07:59:21.657580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.316 [2024-06-11 07:59:21.722784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.316 [2024-06-11 07:59:21.722898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.316 [2024-06-11 07:59:21.723053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.316 [2024-06-11 07:59:21.723054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.403 07:59:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:52.403 00:06:52.403 SPDK Configuration: 00:06:52.403 Core mask: 0xf 00:06:52.403 00:06:52.403 Accel Perf Configuration: 00:06:52.403 Workload Type: decompress 00:06:52.403 Transfer size: 111250 bytes 00:06:52.403 Vector count 1 00:06:52.403 Module: software 00:06:52.403 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.403 Queue depth: 32 00:06:52.403 Allocate depth: 32 00:06:52.403 # threads/core: 1 00:06:52.403 Run time: 1 seconds 00:06:52.403 Verify: Yes 00:06:52.403 00:06:52.403 Running for 1 seconds... 00:06:52.403 00:06:52.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.403 ------------------------------------------------------------------------------------ 00:06:52.403 0,0 4096/s 169 MiB/s 0 0 00:06:52.403 3,0 4096/s 169 MiB/s 0 0 00:06:52.403 2,0 5920/s 244 MiB/s 0 0 00:06:52.403 1,0 4096/s 169 MiB/s 0 0 00:06:52.403 ==================================================================================== 00:06:52.403 Total 18208/s 1931 MiB/s 0 0' 00:06:52.403 07:59:22 -- accel/accel.sh@20 -- # IFS=: 00:06:52.403 07:59:22 -- accel/accel.sh@20 -- # read -r var val 00:06:52.403 07:59:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.403 07:59:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:52.403 07:59:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.403 07:59:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.403 07:59:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.403 07:59:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.403 07:59:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.403 07:59:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.403 07:59:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.403 07:59:22 -- accel/accel.sh@42 -- # jq -r . 00:06:52.403 [2024-06-11 07:59:22.896333] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
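Note: accel_decomp_full_mcore combines the full-buffer variant (-o 0, reported in the configuration dump above as 111250-byte transfers instead of the default 4096) with a multi-core run (-m 0xf, four reactors on cores 0-3). A minimal hand-run sketch under the same assumptions as the earlier snippet:

    # -o 0: full-size transfers as reported in the configuration dump,
    # -m 0xf: core mask selecting four cores
    ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -o 0 -m 0xf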
00:06:52.403 [2024-06-11 07:59:22.896432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855472 ] 00:06:52.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.403 [2024-06-11 07:59:22.959172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.403 [2024-06-11 07:59:23.022484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.403 [2024-06-11 07:59:23.022711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.403 [2024-06-11 07:59:23.022712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.403 [2024-06-11 07:59:23.022548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=0xf 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=decompress 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=software 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=32 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=32 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=1 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val=Yes 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:52.664 07:59:23 -- accel/accel.sh@21 -- # val= 00:06:52.664 07:59:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # IFS=: 00:06:52.664 07:59:23 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 
07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@21 -- # val= 00:06:53.607 07:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:53.607 07:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:53.607 07:59:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.607 07:59:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:53.607 07:59:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.607 00:06:53.607 real 0m2.606s 00:06:53.607 user 0m8.936s 00:06:53.607 sys 0m0.215s 00:06:53.607 07:59:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.607 07:59:24 -- common/autotest_common.sh@10 -- # set +x 00:06:53.607 ************************************ 00:06:53.607 END TEST accel_decomp_full_mcore 00:06:53.607 ************************************ 00:06:53.607 07:59:24 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.607 07:59:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:53.607 07:59:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.607 07:59:24 -- common/autotest_common.sh@10 -- # set +x 00:06:53.607 ************************************ 00:06:53.607 START TEST accel_decomp_mthread 00:06:53.607 ************************************ 00:06:53.607 07:59:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.607 07:59:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.607 07:59:24 -- accel/accel.sh@17 -- # local accel_module 00:06:53.607 07:59:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.607 07:59:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:53.607 07:59:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.607 07:59:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.607 07:59:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.607 07:59:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.607 07:59:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.607 07:59:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.607 07:59:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.607 07:59:24 -- accel/accel.sh@42 -- # jq -r . 00:06:53.607 [2024-06-11 07:59:24.244204] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:53.607 [2024-06-11 07:59:24.244303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855806 ] 00:06:53.867 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.867 [2024-06-11 07:59:24.307796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.867 [2024-06-11 07:59:24.371326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.254 07:59:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:55.254 00:06:55.254 SPDK Configuration: 00:06:55.254 Core mask: 0x1 00:06:55.254 00:06:55.254 Accel Perf Configuration: 00:06:55.254 Workload Type: decompress 00:06:55.254 Transfer size: 4096 bytes 00:06:55.254 Vector count 1 00:06:55.254 Module: software 00:06:55.254 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.254 Queue depth: 32 00:06:55.254 Allocate depth: 32 00:06:55.254 # threads/core: 2 00:06:55.254 Run time: 1 seconds 00:06:55.254 Verify: Yes 00:06:55.254 00:06:55.254 Running for 1 seconds... 00:06:55.254 00:06:55.254 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.254 ------------------------------------------------------------------------------------ 00:06:55.254 0,1 31872/s 58 MiB/s 0 0 00:06:55.254 0,0 31776/s 58 MiB/s 0 0 00:06:55.254 ==================================================================================== 00:06:55.254 Total 63648/s 248 MiB/s 0 0' 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.254 07:59:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:55.254 07:59:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.254 07:59:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.254 07:59:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.254 07:59:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.254 07:59:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.254 07:59:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.254 07:59:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.254 07:59:25 -- accel/accel.sh@42 -- # jq -r . 00:06:55.254 [2024-06-11 07:59:25.529320] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:55.254 [2024-06-11 07:59:25.529391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855932 ] 00:06:55.254 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.254 [2024-06-11 07:59:25.590977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.254 [2024-06-11 07:59:25.652139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=0x1 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=decompress 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=software 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=32 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 
-- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=32 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=2 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val=Yes 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:55.254 07:59:25 -- accel/accel.sh@21 -- # val= 00:06:55.254 07:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # IFS=: 00:06:55.254 07:59:25 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@21 -- # val= 00:06:56.197 07:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@21 -- # val= 00:06:56.197 07:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@21 -- # val= 00:06:56.197 07:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@21 -- # val= 00:06:56.197 07:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@21 -- # val= 00:06:56.197 07:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@21 -- # val= 00:06:56.197 07:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@21 -- # val= 00:06:56.197 07:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:56.197 07:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:56.197 07:59:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.197 07:59:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:56.197 07:59:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.197 00:06:56.197 real 0m2.572s 00:06:56.197 user 0m2.382s 00:06:56.197 sys 0m0.196s 00:06:56.197 07:59:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.197 07:59:26 -- common/autotest_common.sh@10 -- # set +x 
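For reference, the threaded software decompress case traced above reduces to a single accel_perf invocation. A minimal sketch, assuming the workspace path recorded in the log and the flags shown in the trace (-t run time in seconds, -w workload, -l compressed input file, -y verify output, -T threads per core); the harness additionally feeds a JSON accel config on /dev/fd/62 via -c, which is omitted here since the default software module is used anyway:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Run the software decompress workload for 1 second with 2 threads per core,
    # reading the pre-compressed test input and verifying the decompressed output.
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2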
00:06:56.197 ************************************ 00:06:56.197 END TEST accel_decomp_mthread 00:06:56.197 ************************************ 00:06:56.197 07:59:26 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.197 07:59:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:56.197 07:59:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.197 07:59:26 -- common/autotest_common.sh@10 -- # set +x 00:06:56.197 ************************************ 00:06:56.197 START TEST accel_deomp_full_mthread 00:06:56.197 ************************************ 00:06:56.197 07:59:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.197 07:59:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.197 07:59:26 -- accel/accel.sh@17 -- # local accel_module 00:06:56.197 07:59:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.197 07:59:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:56.197 07:59:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.197 07:59:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.197 07:59:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.197 07:59:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.197 07:59:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.197 07:59:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.197 07:59:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.197 07:59:26 -- accel/accel.sh@42 -- # jq -r . 00:06:56.458 [2024-06-11 07:59:26.858212] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:56.458 [2024-06-11 07:59:26.858293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856200 ] 00:06:56.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.458 [2024-06-11 07:59:26.920859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.458 [2024-06-11 07:59:26.984959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.844 07:59:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:57.844 00:06:57.844 SPDK Configuration: 00:06:57.844 Core mask: 0x1 00:06:57.844 00:06:57.844 Accel Perf Configuration: 00:06:57.844 Workload Type: decompress 00:06:57.844 Transfer size: 111250 bytes 00:06:57.844 Vector count 1 00:06:57.844 Module: software 00:06:57.844 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.844 Queue depth: 32 00:06:57.844 Allocate depth: 32 00:06:57.844 # threads/core: 2 00:06:57.844 Run time: 1 seconds 00:06:57.844 Verify: Yes 00:06:57.844 00:06:57.844 Running for 1 seconds... 
00:06:57.844 00:06:57.844 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.844 ------------------------------------------------------------------------------------ 00:06:57.844 0,1 2080/s 85 MiB/s 0 0 00:06:57.844 0,0 2048/s 84 MiB/s 0 0 00:06:57.844 ==================================================================================== 00:06:57.844 Total 4128/s 437 MiB/s 0 0' 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.844 07:59:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:57.844 07:59:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.844 07:59:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.844 07:59:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.844 07:59:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.844 07:59:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.844 07:59:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.844 07:59:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.844 07:59:28 -- accel/accel.sh@42 -- # jq -r . 00:06:57.844 [2024-06-11 07:59:28.165879] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:57.844 [2024-06-11 07:59:28.165978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856542 ] 00:06:57.844 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.844 [2024-06-11 07:59:28.227502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.844 [2024-06-11 07:59:28.289781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=0x1 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=decompress 00:06:57.844 
07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=software 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=32 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=32 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=2 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.844 07:59:28 -- accel/accel.sh@21 -- # val=Yes 00:06:57.844 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.844 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.845 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.845 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.845 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.845 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.845 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.845 07:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.845 07:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.845 07:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.845 07:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.230 07:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.230 07:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.230 07:59:29 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.230 07:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.230 07:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.230 07:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.230 07:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.230 07:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.230 07:59:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.230 07:59:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:59.230 07:59:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.230 00:06:59.230 real 0m2.623s 00:06:59.230 user 0m2.418s 00:06:59.230 sys 0m0.211s 00:06:59.230 07:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.230 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.230 ************************************ 00:06:59.230 END TEST accel_deomp_full_mthread 00:06:59.230 ************************************ 00:06:59.230 07:59:29 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:59.230 07:59:29 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.230 07:59:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:59.230 07:59:29 -- accel/accel.sh@129 -- # build_accel_config 00:06:59.230 07:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.230 07:59:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.230 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.230 07:59:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.230 07:59:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.230 07:59:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.230 07:59:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.230 07:59:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.230 07:59:29 -- accel/accel.sh@42 -- # jq -r . 00:06:59.230 ************************************ 00:06:59.230 START TEST accel_dif_functional_tests 00:06:59.230 ************************************ 00:06:59.230 07:59:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:59.230 [2024-06-11 07:59:29.548641] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:59.230 [2024-06-11 07:59:29.548726] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856895 ] 00:06:59.230 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.230 [2024-06-11 07:59:29.620388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.230 [2024-06-11 07:59:29.689466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.230 [2024-06-11 07:59:29.689548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.230 [2024-06-11 07:59:29.689551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.230 00:06:59.230 00:06:59.230 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.230 http://cunit.sourceforge.net/ 00:06:59.230 00:06:59.230 00:06:59.230 Suite: accel_dif 00:06:59.230 Test: verify: DIF generated, GUARD check ...passed 00:06:59.230 Test: verify: DIF generated, APPTAG check ...passed 00:06:59.230 Test: verify: DIF generated, REFTAG check ...passed 00:06:59.230 Test: verify: DIF not generated, GUARD check ...[2024-06-11 07:59:29.745306] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.230 [2024-06-11 07:59:29.745344] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:59.230 passed 00:06:59.230 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 07:59:29.745372] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.230 [2024-06-11 07:59:29.745386] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:59.230 passed 00:06:59.230 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 07:59:29.745404] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.230 [2024-06-11 07:59:29.745417] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:59.230 passed 00:06:59.230 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:59.230 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 07:59:29.745465] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:59.230 passed 00:06:59.230 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:59.230 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:59.230 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:59.230 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 07:59:29.745577] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:59.230 passed 00:06:59.230 Test: generate copy: DIF generated, GUARD check ...passed 00:06:59.230 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:59.230 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:59.230 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:59.230 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:59.230 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:59.230 Test: generate copy: iovecs-len validate ...[2024-06-11 07:59:29.745762] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:59.230 passed 00:06:59.231 Test: generate copy: buffer alignment validate ...passed 00:06:59.231 00:06:59.231 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.231 suites 1 1 n/a 0 0 00:06:59.231 tests 20 20 20 0 0 00:06:59.231 asserts 204 204 204 0 n/a 00:06:59.231 00:06:59.231 Elapsed time = 0.002 seconds 00:06:59.231 00:06:59.231 real 0m0.362s 00:06:59.231 user 0m0.489s 00:06:59.231 sys 0m0.137s 00:06:59.231 07:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.231 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.231 ************************************ 00:06:59.231 END TEST accel_dif_functional_tests 00:06:59.231 ************************************ 00:06:59.492 00:06:59.492 real 0m54.720s 00:06:59.492 user 1m3.206s 00:06:59.492 sys 0m5.689s 00:06:59.492 07:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.492 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 ************************************ 00:06:59.492 END TEST accel 00:06:59.492 ************************************ 00:06:59.492 07:59:29 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:59.492 07:59:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:59.492 07:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.492 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 ************************************ 00:06:59.492 START TEST accel_rpc 00:06:59.492 ************************************ 00:06:59.492 07:59:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:59.492 * Looking for test storage... 00:06:59.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:59.492 07:59:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:59.492 07:59:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=856954 00:06:59.492 07:59:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 856954 00:06:59.492 07:59:30 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:59.492 07:59:30 -- common/autotest_common.sh@819 -- # '[' -z 856954 ']' 00:06:59.492 07:59:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.492 07:59:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:59.492 07:59:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.492 07:59:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:59.492 07:59:30 -- common/autotest_common.sh@10 -- # set +x 00:06:59.492 [2024-06-11 07:59:30.087391] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:59.492 [2024-06-11 07:59:30.087459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856954 ] 00:06:59.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.752 [2024-06-11 07:59:30.148514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.752 [2024-06-11 07:59:30.213885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:59.752 [2024-06-11 07:59:30.214009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.323 07:59:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:00.323 07:59:30 -- common/autotest_common.sh@852 -- # return 0 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:00.323 07:59:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:00.323 07:59:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.323 07:59:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.323 ************************************ 00:07:00.323 START TEST accel_assign_opcode 00:07:00.323 ************************************ 00:07:00.323 07:59:30 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:00.323 07:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:00.323 07:59:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.323 [2024-06-11 07:59:30.843829] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:00.323 07:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:00.323 07:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:00.323 07:59:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.323 [2024-06-11 07:59:30.855856] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:00.323 07:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:00.323 07:59:30 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:00.323 07:59:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:00.323 07:59:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.583 07:59:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:00.583 07:59:31 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:00.583 07:59:31 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:00.583 07:59:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:00.583 07:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:00.583 07:59:31 -- accel/accel_rpc.sh@42 -- # grep software 00:07:00.583 07:59:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:00.583 software 00:07:00.583 00:07:00.583 real 0m0.208s 00:07:00.583 user 0m0.052s 00:07:00.583 sys 0m0.008s 00:07:00.583 07:59:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.583 07:59:31 -- common/autotest_common.sh@10 -- # set +x 
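The opcode-assignment flow exercised above can be driven by hand with rpc.py. A minimal sketch under the same assumptions (workspace path from the log, default /var/tmp/spdk.sock RPC socket); the RPC names and flags are the ones recorded in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start the target but defer subsystem initialization so opcode
    # assignments can still be changed.
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    sleep 2   # the harness waits for the RPC socket; a short pause suffices by hand
    # Pin the 'copy' opcode to the software module, then finish initialization.
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    # The assignment should now report 'software' for copy.
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy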
00:07:00.583 ************************************ 00:07:00.583 END TEST accel_assign_opcode 00:07:00.583 ************************************ 00:07:00.583 07:59:31 -- accel/accel_rpc.sh@55 -- # killprocess 856954 00:07:00.583 07:59:31 -- common/autotest_common.sh@926 -- # '[' -z 856954 ']' 00:07:00.583 07:59:31 -- common/autotest_common.sh@930 -- # kill -0 856954 00:07:00.583 07:59:31 -- common/autotest_common.sh@931 -- # uname 00:07:00.583 07:59:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:00.583 07:59:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 856954 00:07:00.583 07:59:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:00.583 07:59:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:00.583 07:59:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 856954' 00:07:00.583 killing process with pid 856954 00:07:00.583 07:59:31 -- common/autotest_common.sh@945 -- # kill 856954 00:07:00.583 07:59:31 -- common/autotest_common.sh@950 -- # wait 856954 00:07:00.843 00:07:00.843 real 0m1.403s 00:07:00.843 user 0m1.479s 00:07:00.843 sys 0m0.369s 00:07:00.843 07:59:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.843 07:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:00.843 ************************************ 00:07:00.843 END TEST accel_rpc 00:07:00.843 ************************************ 00:07:00.843 07:59:31 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.843 07:59:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:00.843 07:59:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.843 07:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:00.843 ************************************ 00:07:00.843 START TEST app_cmdline 00:07:00.843 ************************************ 00:07:00.843 07:59:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:00.843 * Looking for test storage... 00:07:00.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:00.843 07:59:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:00.843 07:59:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=857363 00:07:00.843 07:59:31 -- app/cmdline.sh@18 -- # waitforlisten 857363 00:07:00.844 07:59:31 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:00.844 07:59:31 -- common/autotest_common.sh@819 -- # '[' -z 857363 ']' 00:07:00.844 07:59:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.844 07:59:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:00.844 07:59:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.844 07:59:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:00.844 07:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:01.104 [2024-06-11 07:59:31.526783] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:01.104 [2024-06-11 07:59:31.526838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857363 ] 00:07:01.104 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.104 [2024-06-11 07:59:31.587118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.104 [2024-06-11 07:59:31.650995] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.104 [2024-06-11 07:59:31.651121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.675 07:59:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:01.675 07:59:32 -- common/autotest_common.sh@852 -- # return 0 00:07:01.675 07:59:32 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:01.935 { 00:07:01.935 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:01.935 "fields": { 00:07:01.935 "major": 24, 00:07:01.935 "minor": 1, 00:07:01.935 "patch": 1, 00:07:01.935 "suffix": "-pre", 00:07:01.935 "commit": "130b9406a" 00:07:01.935 } 00:07:01.935 } 00:07:01.935 07:59:32 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:01.935 07:59:32 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:01.935 07:59:32 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:01.935 07:59:32 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:01.935 07:59:32 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:01.935 07:59:32 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:01.935 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:01.935 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:01.935 07:59:32 -- app/cmdline.sh@26 -- # sort 00:07:01.935 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:01.935 07:59:32 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:01.935 07:59:32 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:01.935 07:59:32 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.935 07:59:32 -- common/autotest_common.sh@640 -- # local es=0 00:07:01.935 07:59:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:01.935 07:59:32 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.935 07:59:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:01.935 07:59:32 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.935 07:59:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:01.935 07:59:32 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.935 07:59:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:01.935 07:59:32 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.935 07:59:32 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.935 07:59:32 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.196 request: 00:07:02.196 { 00:07:02.196 "method": "env_dpdk_get_mem_stats", 00:07:02.196 "req_id": 1 00:07:02.196 } 00:07:02.196 Got JSON-RPC error response 00:07:02.196 response: 00:07:02.196 { 00:07:02.196 "code": -32601, 00:07:02.196 "message": "Method not found" 00:07:02.196 } 00:07:02.196 07:59:32 -- common/autotest_common.sh@643 -- # es=1 00:07:02.196 07:59:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:02.196 07:59:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:02.196 07:59:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:02.196 07:59:32 -- app/cmdline.sh@1 -- # killprocess 857363 00:07:02.196 07:59:32 -- common/autotest_common.sh@926 -- # '[' -z 857363 ']' 00:07:02.196 07:59:32 -- common/autotest_common.sh@930 -- # kill -0 857363 00:07:02.196 07:59:32 -- common/autotest_common.sh@931 -- # uname 00:07:02.196 07:59:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:02.196 07:59:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 857363 00:07:02.196 07:59:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:02.196 07:59:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:02.196 07:59:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 857363' 00:07:02.196 killing process with pid 857363 00:07:02.196 07:59:32 -- common/autotest_common.sh@945 -- # kill 857363 00:07:02.196 07:59:32 -- common/autotest_common.sh@950 -- # wait 857363 00:07:02.456 00:07:02.456 real 0m1.550s 00:07:02.456 user 0m1.884s 00:07:02.456 sys 0m0.385s 00:07:02.456 07:59:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.456 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.456 ************************************ 00:07:02.456 END TEST app_cmdline 00:07:02.456 ************************************ 00:07:02.456 07:59:32 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.456 07:59:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:02.456 07:59:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.456 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.456 ************************************ 00:07:02.456 START TEST version 00:07:02.456 ************************************ 00:07:02.456 07:59:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:02.456 * Looking for test storage... 
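The app_cmdline run above starts the target with an RPC allow-list, so only the two listed methods are callable; any other method is rejected with JSON-RPC error -32601, which is exactly what the env_dpdk_get_mem_stats call demonstrates. A minimal sketch of the same behaviour, assuming the workspace path from the log and the default RPC socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2
    "$SPDK/scripts/rpc.py" spdk_get_version                        # allowed: prints the version JSON shown above
    "$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort    # allowed: lists exactly the two permitted methods
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats                  # not on the list: "Method not found" (-32601)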
00:07:02.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:02.456 07:59:33 -- app/version.sh@17 -- # get_header_version major 00:07:02.456 07:59:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.456 07:59:33 -- app/version.sh@14 -- # cut -f2 00:07:02.456 07:59:33 -- app/version.sh@14 -- # tr -d '"' 00:07:02.456 07:59:33 -- app/version.sh@17 -- # major=24 00:07:02.456 07:59:33 -- app/version.sh@18 -- # get_header_version minor 00:07:02.456 07:59:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.456 07:59:33 -- app/version.sh@14 -- # cut -f2 00:07:02.456 07:59:33 -- app/version.sh@14 -- # tr -d '"' 00:07:02.456 07:59:33 -- app/version.sh@18 -- # minor=1 00:07:02.456 07:59:33 -- app/version.sh@19 -- # get_header_version patch 00:07:02.456 07:59:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.456 07:59:33 -- app/version.sh@14 -- # cut -f2 00:07:02.456 07:59:33 -- app/version.sh@14 -- # tr -d '"' 00:07:02.456 07:59:33 -- app/version.sh@19 -- # patch=1 00:07:02.716 07:59:33 -- app/version.sh@20 -- # get_header_version suffix 00:07:02.716 07:59:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:02.716 07:59:33 -- app/version.sh@14 -- # cut -f2 00:07:02.716 07:59:33 -- app/version.sh@14 -- # tr -d '"' 00:07:02.716 07:59:33 -- app/version.sh@20 -- # suffix=-pre 00:07:02.716 07:59:33 -- app/version.sh@22 -- # version=24.1 00:07:02.716 07:59:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:02.716 07:59:33 -- app/version.sh@25 -- # version=24.1.1 00:07:02.716 07:59:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:02.716 07:59:33 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:02.716 07:59:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:02.716 07:59:33 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:02.716 07:59:33 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:02.716 00:07:02.716 real 0m0.170s 00:07:02.717 user 0m0.082s 00:07:02.717 sys 0m0.126s 00:07:02.717 07:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.717 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.717 ************************************ 00:07:02.717 END TEST version 00:07:02.717 ************************************ 00:07:02.717 07:59:33 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@204 -- # uname -s 00:07:02.717 07:59:33 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:02.717 07:59:33 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:02.717 07:59:33 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:02.717 07:59:33 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:02.717 07:59:33 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:02.717 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.717 07:59:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:02.717 07:59:33 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:02.717 07:59:33 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.717 07:59:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:02.717 07:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.717 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.717 ************************************ 00:07:02.717 START TEST nvmf_tcp 00:07:02.717 ************************************ 00:07:02.717 07:59:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:02.717 * Looking for test storage... 00:07:02.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:02.717 07:59:33 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:02.717 07:59:33 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:02.717 07:59:33 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.717 07:59:33 -- nvmf/common.sh@7 -- # uname -s 00:07:02.717 07:59:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.717 07:59:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.717 07:59:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.717 07:59:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.717 07:59:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.717 07:59:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.717 07:59:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.717 07:59:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.717 07:59:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.717 07:59:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.717 07:59:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.717 07:59:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.717 07:59:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.717 07:59:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.717 07:59:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.717 07:59:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.717 07:59:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.717 07:59:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.717 07:59:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.717 07:59:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.717 07:59:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.717 07:59:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.717 07:59:33 -- paths/export.sh@5 -- # export PATH 00:07:02.717 07:59:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.717 07:59:33 -- nvmf/common.sh@46 -- # : 0 00:07:02.717 07:59:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:02.717 07:59:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:02.717 07:59:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:02.717 07:59:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.717 07:59:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.717 07:59:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:02.717 07:59:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:02.717 07:59:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:02.977 07:59:33 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:02.977 07:59:33 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:02.977 07:59:33 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:02.977 07:59:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:02.977 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.977 07:59:33 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:02.977 07:59:33 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:02.977 07:59:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:02.977 07:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.977 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.977 ************************************ 00:07:02.977 START TEST nvmf_example 00:07:02.977 ************************************ 00:07:02.977 07:59:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:02.977 * Looking for test storage... 
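The version.sh steps traced a few entries above pull the version numbers straight out of the public header and cross-check them against the Python package. A rough sketch of that parsing, assuming the same workspace layout (the grep/cut/tr pipeline and the -pre to rc0 mapping are taken from the trace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hdr="$SPDK/include/spdk/version.h"
    ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(ver MAJOR); minor=$(ver MINOR); patch=$(ver PATCH); suffix=$(ver SUFFIX)
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"   # 24.1 -> 24.1.1 when a patch level is set
    [[ $suffix == -pre ]] && version+=rc0           # the -pre header suffix shows up as rc0
    echo "$version"                                  # 24.1.1rc0 in this run
    PYTHONPATH="$SPDK/python" python3 -c 'import spdk; print(spdk.__version__)'   # must print the same string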
00:07:02.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.978 07:59:33 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.978 07:59:33 -- nvmf/common.sh@7 -- # uname -s 00:07:02.978 07:59:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.978 07:59:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.978 07:59:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.978 07:59:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.978 07:59:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.978 07:59:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.978 07:59:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.978 07:59:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.978 07:59:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.978 07:59:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.978 07:59:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.978 07:59:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:02.978 07:59:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.978 07:59:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.978 07:59:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.978 07:59:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.978 07:59:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.978 07:59:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.978 07:59:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.978 07:59:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.978 07:59:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.978 07:59:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.978 07:59:33 -- paths/export.sh@5 -- # export PATH 00:07:02.978 07:59:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.978 07:59:33 -- nvmf/common.sh@46 -- # : 0 00:07:02.978 07:59:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:02.978 07:59:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:02.978 07:59:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:02.978 07:59:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.978 07:59:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.978 07:59:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:02.978 07:59:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:02.978 07:59:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:02.978 07:59:33 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:02.978 07:59:33 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:02.978 07:59:33 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:02.978 07:59:33 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:02.978 07:59:33 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:02.978 07:59:33 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:02.978 07:59:33 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:02.978 07:59:33 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:02.978 07:59:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:02.978 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.978 07:59:33 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:02.978 07:59:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:02.978 07:59:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.978 07:59:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:02.978 07:59:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:02.978 07:59:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:02.978 07:59:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.978 07:59:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.978 07:59:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.978 07:59:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:02.978 07:59:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:02.978 07:59:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:02.978 07:59:33 -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.119 07:59:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:11.119 07:59:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:11.119 07:59:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:11.119 07:59:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:11.119 07:59:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:11.119 07:59:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:11.119 07:59:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:11.119 07:59:40 -- nvmf/common.sh@294 -- # net_devs=() 00:07:11.119 07:59:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:11.119 07:59:40 -- nvmf/common.sh@295 -- # e810=() 00:07:11.119 07:59:40 -- nvmf/common.sh@295 -- # local -ga e810 00:07:11.119 07:59:40 -- nvmf/common.sh@296 -- # x722=() 00:07:11.119 07:59:40 -- nvmf/common.sh@296 -- # local -ga x722 00:07:11.119 07:59:40 -- nvmf/common.sh@297 -- # mlx=() 00:07:11.119 07:59:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:11.119 07:59:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.119 07:59:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:11.119 07:59:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:11.119 07:59:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:11.119 07:59:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:11.119 07:59:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:11.119 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:11.119 07:59:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:11.119 07:59:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:11.119 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:11.119 07:59:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
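common.sh resolves the test NICs by matching PCI vendor/device IDs and then reading each port's netdev name from sysfs, which is how the two 0x159b ports found here are mapped to interfaces in the "Found net devices under ..." lines that follow. A rough manual equivalent, with the device ID and bus addresses taken from the log:

    # List Intel E810 ports (vendor 0x8086, device 0x159b) ...
    lspci -d 8086:159b
    # ... and the netdev name currently bound to each port (cvl_0_0 / cvl_0_1 below).
    for pci in 0000:31:00.0 0000:31:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done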
00:07:11.119 07:59:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:11.119 07:59:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:11.119 07:59:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.119 07:59:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:11.119 07:59:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.119 07:59:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:11.119 Found net devices under 0000:31:00.0: cvl_0_0 00:07:11.119 07:59:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.119 07:59:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:11.119 07:59:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.119 07:59:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:11.119 07:59:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.119 07:59:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:11.119 Found net devices under 0000:31:00.1: cvl_0_1 00:07:11.119 07:59:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.119 07:59:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:11.119 07:59:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:11.119 07:59:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:11.119 07:59:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:11.119 07:59:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.119 07:59:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.119 07:59:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.119 07:59:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:11.119 07:59:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.119 07:59:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.119 07:59:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:11.120 07:59:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.120 07:59:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.120 07:59:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:11.120 07:59:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:11.120 07:59:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.120 07:59:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.120 07:59:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.120 07:59:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.120 07:59:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:11.120 07:59:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.120 07:59:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.120 07:59:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.120 07:59:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:11.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:11.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:07:11.120 00:07:11.120 --- 10.0.0.2 ping statistics --- 00:07:11.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.120 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:07:11.120 07:59:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:07:11.120 00:07:11.120 --- 10.0.0.1 ping statistics --- 00:07:11.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.120 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:07:11.120 07:59:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.120 07:59:40 -- nvmf/common.sh@410 -- # return 0 00:07:11.120 07:59:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:11.120 07:59:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.120 07:59:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:11.120 07:59:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:11.120 07:59:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.120 07:59:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:11.120 07:59:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:11.120 07:59:40 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:11.120 07:59:40 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:11.120 07:59:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:11.120 07:59:40 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 07:59:40 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:11.120 07:59:40 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:11.120 07:59:40 -- target/nvmf_example.sh@34 -- # nvmfpid=861571 00:07:11.120 07:59:40 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.120 07:59:40 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:11.120 07:59:40 -- target/nvmf_example.sh@36 -- # waitforlisten 861571 00:07:11.120 07:59:40 -- common/autotest_common.sh@819 -- # '[' -z 861571 ']' 00:07:11.120 07:59:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.120 07:59:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.120 07:59:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
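The nvmf_tcp_init sequence above builds the test topology the rest of the run depends on: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, its peer port (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction verifies the link. Keeping the target in its own namespace forces the traffic out through the physical ports instead of short-circuiting over loopback. A condensed sketch of the same wiring, assuming two back-to-back ports with those names and root privileges, is:

  # Hedged sketch of the namespace wiring shown in the trace above.
  TGT_IF=cvl_0_0      # target side, placed inside the namespace
  INI_IF=cvl_0_1      # initiator side, left in the host namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                       # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host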
00:07:11.120 07:59:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.120 07:59:40 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.120 07:59:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:11.120 07:59:41 -- common/autotest_common.sh@852 -- # return 0 00:07:11.120 07:59:41 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:11.120 07:59:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:11.120 07:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 07:59:41 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:11.120 07:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.120 07:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 07:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.120 07:59:41 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:11.120 07:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.120 07:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 07:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.120 07:59:41 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:11.120 07:59:41 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:11.120 07:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.120 07:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 07:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.120 07:59:41 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:11.120 07:59:41 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:11.120 07:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.120 07:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 07:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.120 07:59:41 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.120 07:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.120 07:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.120 07:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.120 07:59:41 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:11.120 07:59:41 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:11.120 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.357 Initializing NVMe Controllers 00:07:23.357 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.357 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:23.357 Initialization complete. Launching workers. 
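The rpc_cmd calls traced just above provision the example target that spdk_nvme_perf is now exercising: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and a listener on 10.0.0.2:4420. Outside the test framework the same steps can be driven against a running target with scripts/rpc.py; a hedged sketch (paths relative to an SPDK checkout, default RPC socket assumed):

  # Sketch of the provisioning sequence the test issues through rpc_cmd.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator-side benchmark, exactly as invoked in this run:
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'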
00:07:23.357 ======================================================== 00:07:23.357 Latency(us) 00:07:23.357 Device Information : IOPS MiB/s Average min max 00:07:23.357 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19152.15 74.81 3341.16 829.58 16153.37 00:07:23.357 ======================================================== 00:07:23.357 Total : 19152.15 74.81 3341.16 829.58 16153.37 00:07:23.357 00:07:23.357 07:59:51 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:23.357 07:59:51 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:23.357 07:59:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:23.357 07:59:51 -- nvmf/common.sh@116 -- # sync 00:07:23.357 07:59:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:23.357 07:59:51 -- nvmf/common.sh@119 -- # set +e 00:07:23.357 07:59:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:23.357 07:59:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:23.357 rmmod nvme_tcp 00:07:23.357 rmmod nvme_fabrics 00:07:23.357 rmmod nvme_keyring 00:07:23.357 07:59:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:23.357 07:59:51 -- nvmf/common.sh@123 -- # set -e 00:07:23.357 07:59:51 -- nvmf/common.sh@124 -- # return 0 00:07:23.357 07:59:51 -- nvmf/common.sh@477 -- # '[' -n 861571 ']' 00:07:23.357 07:59:51 -- nvmf/common.sh@478 -- # killprocess 861571 00:07:23.357 07:59:52 -- common/autotest_common.sh@926 -- # '[' -z 861571 ']' 00:07:23.357 07:59:52 -- common/autotest_common.sh@930 -- # kill -0 861571 00:07:23.357 07:59:52 -- common/autotest_common.sh@931 -- # uname 00:07:23.357 07:59:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:23.357 07:59:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 861571 00:07:23.357 07:59:52 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:23.357 07:59:52 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:23.357 07:59:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 861571' 00:07:23.357 killing process with pid 861571 00:07:23.357 07:59:52 -- common/autotest_common.sh@945 -- # kill 861571 00:07:23.357 07:59:52 -- common/autotest_common.sh@950 -- # wait 861571 00:07:23.357 nvmf threads initialize successfully 00:07:23.357 bdev subsystem init successfully 00:07:23.357 created a nvmf target service 00:07:23.357 create targets's poll groups done 00:07:23.357 all subsystems of target started 00:07:23.357 nvmf target is running 00:07:23.357 all subsystems of target stopped 00:07:23.357 destroy targets's poll groups done 00:07:23.357 destroyed the nvmf target service 00:07:23.357 bdev subsystem finish successfully 00:07:23.357 nvmf threads destroy successfully 00:07:23.357 07:59:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:23.357 07:59:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:23.357 07:59:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:23.357 07:59:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.357 07:59:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:23.357 07:59:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.357 07:59:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.357 07:59:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.618 07:59:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:23.618 07:59:54 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:23.618 07:59:54 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:23.618 07:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.881 00:07:23.881 real 0m20.907s 00:07:23.881 user 0m46.306s 00:07:23.881 sys 0m6.527s 00:07:23.881 07:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.881 07:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.881 ************************************ 00:07:23.881 END TEST nvmf_example 00:07:23.881 ************************************ 00:07:23.881 07:59:54 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.881 07:59:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:23.881 07:59:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.881 07:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.881 ************************************ 00:07:23.881 START TEST nvmf_filesystem 00:07:23.881 ************************************ 00:07:23.881 07:59:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.881 * Looking for test storage... 00:07:23.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.881 07:59:54 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:23.881 07:59:54 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:23.881 07:59:54 -- common/autotest_common.sh@34 -- # set -e 00:07:23.881 07:59:54 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:23.881 07:59:54 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:23.881 07:59:54 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:23.881 07:59:54 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:23.881 07:59:54 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:23.881 07:59:54 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:23.881 07:59:54 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:23.881 07:59:54 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:23.881 07:59:54 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:23.881 07:59:54 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:23.881 07:59:54 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:23.881 07:59:54 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:23.881 07:59:54 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:23.881 07:59:54 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:23.881 07:59:54 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:23.881 07:59:54 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:23.881 07:59:54 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:23.881 07:59:54 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:23.881 07:59:54 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:23.881 07:59:54 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:23.881 07:59:54 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:23.881 07:59:54 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:23.881 07:59:54 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.881 07:59:54 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:07:23.881 07:59:54 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:23.881 07:59:54 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:23.881 07:59:54 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:23.881 07:59:54 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:23.881 07:59:54 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:23.881 07:59:54 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:23.881 07:59:54 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:23.881 07:59:54 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:23.881 07:59:54 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:23.881 07:59:54 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:23.881 07:59:54 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:23.881 07:59:54 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:23.881 07:59:54 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:23.881 07:59:54 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:23.881 07:59:54 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:23.881 07:59:54 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:23.881 07:59:54 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:23.881 07:59:54 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:23.881 07:59:54 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:23.881 07:59:54 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:23.881 07:59:54 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:23.881 07:59:54 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:23.881 07:59:54 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:23.881 07:59:54 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:23.881 07:59:54 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:23.881 07:59:54 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:23.881 07:59:54 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:23.881 07:59:54 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:23.881 07:59:54 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:23.881 07:59:54 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:23.881 07:59:54 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:23.881 07:59:54 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:23.881 07:59:54 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:23.881 07:59:54 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:23.881 07:59:54 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:23.881 07:59:54 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:23.881 07:59:54 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:23.881 07:59:54 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:23.881 07:59:54 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:23.881 07:59:54 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:23.881 07:59:54 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:23.881 07:59:54 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:23.881 07:59:54 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:23.881 07:59:54 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:23.881 07:59:54 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:23.881 07:59:54 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:23.881 07:59:54 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:07:23.881 07:59:54 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:23.881 07:59:54 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:23.881 07:59:54 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:23.881 07:59:54 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:23.881 07:59:54 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:23.881 07:59:54 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:23.881 07:59:54 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:23.881 07:59:54 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:23.881 07:59:54 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:23.881 07:59:54 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:23.881 07:59:54 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:23.881 07:59:54 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:23.881 07:59:54 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.881 07:59:54 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.881 07:59:54 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.882 07:59:54 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.882 07:59:54 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.882 07:59:54 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.882 07:59:54 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.882 07:59:54 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.882 07:59:54 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:23.882 07:59:54 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:23.882 07:59:54 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:23.882 07:59:54 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:23.882 07:59:54 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:23.882 07:59:54 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:23.882 07:59:54 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:23.882 07:59:54 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:23.882 #define SPDK_CONFIG_H 00:07:23.882 #define SPDK_CONFIG_APPS 1 00:07:23.882 #define SPDK_CONFIG_ARCH native 00:07:23.882 #undef SPDK_CONFIG_ASAN 00:07:23.882 #undef SPDK_CONFIG_AVAHI 00:07:23.882 #undef SPDK_CONFIG_CET 00:07:23.882 #define SPDK_CONFIG_COVERAGE 1 00:07:23.882 #define SPDK_CONFIG_CROSS_PREFIX 00:07:23.882 #undef SPDK_CONFIG_CRYPTO 00:07:23.882 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:23.882 #undef SPDK_CONFIG_CUSTOMOCF 00:07:23.882 #undef SPDK_CONFIG_DAOS 00:07:23.882 #define SPDK_CONFIG_DAOS_DIR 00:07:23.882 #define SPDK_CONFIG_DEBUG 1 00:07:23.882 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:23.882 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:23.882 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:23.882 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:07:23.882 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:23.882 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.882 #define SPDK_CONFIG_EXAMPLES 1 00:07:23.882 #undef SPDK_CONFIG_FC 00:07:23.882 #define SPDK_CONFIG_FC_PATH 00:07:23.882 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:23.882 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:23.882 #undef SPDK_CONFIG_FUSE 00:07:23.882 #undef SPDK_CONFIG_FUZZER 00:07:23.882 #define SPDK_CONFIG_FUZZER_LIB 00:07:23.882 #undef SPDK_CONFIG_GOLANG 00:07:23.882 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:23.882 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:23.882 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:23.882 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:23.882 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:23.882 #define SPDK_CONFIG_IDXD 1 00:07:23.882 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:23.882 #undef SPDK_CONFIG_IPSEC_MB 00:07:23.882 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:23.882 #define SPDK_CONFIG_ISAL 1 00:07:23.882 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:23.882 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:23.882 #define SPDK_CONFIG_LIBDIR 00:07:23.882 #undef SPDK_CONFIG_LTO 00:07:23.882 #define SPDK_CONFIG_MAX_LCORES 00:07:23.882 #define SPDK_CONFIG_NVME_CUSE 1 00:07:23.882 #undef SPDK_CONFIG_OCF 00:07:23.882 #define SPDK_CONFIG_OCF_PATH 00:07:23.882 #define SPDK_CONFIG_OPENSSL_PATH 00:07:23.882 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:23.882 #undef SPDK_CONFIG_PGO_USE 00:07:23.882 #define SPDK_CONFIG_PREFIX /usr/local 00:07:23.882 #undef SPDK_CONFIG_RAID5F 00:07:23.882 #undef SPDK_CONFIG_RBD 00:07:23.882 #define SPDK_CONFIG_RDMA 1 00:07:23.882 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:23.882 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:23.882 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:23.882 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:23.882 #define SPDK_CONFIG_SHARED 1 00:07:23.882 #undef SPDK_CONFIG_SMA 00:07:23.882 #define SPDK_CONFIG_TESTS 1 00:07:23.882 #undef SPDK_CONFIG_TSAN 00:07:23.882 #define SPDK_CONFIG_UBLK 1 00:07:23.882 #define SPDK_CONFIG_UBSAN 1 00:07:23.882 #undef SPDK_CONFIG_UNIT_TESTS 00:07:23.882 #undef SPDK_CONFIG_URING 00:07:23.882 #define SPDK_CONFIG_URING_PATH 00:07:23.882 #undef SPDK_CONFIG_URING_ZNS 00:07:23.882 #undef SPDK_CONFIG_USDT 00:07:23.882 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:23.882 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:23.882 #undef SPDK_CONFIG_VFIO_USER 00:07:23.882 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:23.882 #define SPDK_CONFIG_VHOST 1 00:07:23.882 #define SPDK_CONFIG_VIRTIO 1 00:07:23.882 #undef SPDK_CONFIG_VTUNE 00:07:23.882 #define SPDK_CONFIG_VTUNE_DIR 00:07:23.882 #define SPDK_CONFIG_WERROR 1 00:07:23.882 #define SPDK_CONFIG_WPDK_DIR 00:07:23.882 #undef SPDK_CONFIG_XNVME 00:07:23.882 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:23.882 07:59:54 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:23.882 07:59:54 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.882 07:59:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.882 07:59:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.882 07:59:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.882 07:59:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.882 07:59:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.882 07:59:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.882 07:59:54 -- paths/export.sh@5 -- # export PATH 00:07:23.882 07:59:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.882 07:59:54 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.882 07:59:54 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.882 07:59:54 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.882 07:59:54 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.882 07:59:54 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:23.882 07:59:54 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.882 07:59:54 -- pm/common@16 -- # TEST_TAG=N/A 00:07:23.882 07:59:54 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:23.882 07:59:54 -- common/autotest_common.sh@52 -- # : 1 00:07:23.882 07:59:54 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:23.882 07:59:54 -- common/autotest_common.sh@56 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:23.882 07:59:54 -- 
common/autotest_common.sh@58 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:23.882 07:59:54 -- common/autotest_common.sh@60 -- # : 1 00:07:23.882 07:59:54 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:23.882 07:59:54 -- common/autotest_common.sh@62 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:23.882 07:59:54 -- common/autotest_common.sh@64 -- # : 00:07:23.882 07:59:54 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:23.882 07:59:54 -- common/autotest_common.sh@66 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:23.882 07:59:54 -- common/autotest_common.sh@68 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:23.882 07:59:54 -- common/autotest_common.sh@70 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:23.882 07:59:54 -- common/autotest_common.sh@72 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:23.882 07:59:54 -- common/autotest_common.sh@74 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:23.882 07:59:54 -- common/autotest_common.sh@76 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:23.882 07:59:54 -- common/autotest_common.sh@78 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:23.882 07:59:54 -- common/autotest_common.sh@80 -- # : 1 00:07:23.882 07:59:54 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:23.882 07:59:54 -- common/autotest_common.sh@82 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:23.882 07:59:54 -- common/autotest_common.sh@84 -- # : 0 00:07:23.882 07:59:54 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:23.882 07:59:54 -- common/autotest_common.sh@86 -- # : 1 00:07:23.882 07:59:54 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:23.882 07:59:54 -- common/autotest_common.sh@88 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:23.883 07:59:54 -- common/autotest_common.sh@90 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:23.883 07:59:54 -- common/autotest_common.sh@92 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:23.883 07:59:54 -- common/autotest_common.sh@94 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:23.883 07:59:54 -- common/autotest_common.sh@96 -- # : tcp 00:07:23.883 07:59:54 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:23.883 07:59:54 -- common/autotest_common.sh@98 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:23.883 07:59:54 -- common/autotest_common.sh@100 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:23.883 07:59:54 -- common/autotest_common.sh@102 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:23.883 07:59:54 -- common/autotest_common.sh@104 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:23.883 
07:59:54 -- common/autotest_common.sh@106 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:23.883 07:59:54 -- common/autotest_common.sh@108 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:23.883 07:59:54 -- common/autotest_common.sh@110 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:23.883 07:59:54 -- common/autotest_common.sh@112 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:23.883 07:59:54 -- common/autotest_common.sh@114 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:23.883 07:59:54 -- common/autotest_common.sh@116 -- # : 1 00:07:23.883 07:59:54 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:23.883 07:59:54 -- common/autotest_common.sh@118 -- # : 00:07:23.883 07:59:54 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:23.883 07:59:54 -- common/autotest_common.sh@120 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:23.883 07:59:54 -- common/autotest_common.sh@122 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:23.883 07:59:54 -- common/autotest_common.sh@124 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:23.883 07:59:54 -- common/autotest_common.sh@126 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:23.883 07:59:54 -- common/autotest_common.sh@128 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:23.883 07:59:54 -- common/autotest_common.sh@130 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:23.883 07:59:54 -- common/autotest_common.sh@132 -- # : 00:07:23.883 07:59:54 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:23.883 07:59:54 -- common/autotest_common.sh@134 -- # : true 00:07:23.883 07:59:54 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:23.883 07:59:54 -- common/autotest_common.sh@136 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:23.883 07:59:54 -- common/autotest_common.sh@138 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:23.883 07:59:54 -- common/autotest_common.sh@140 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:23.883 07:59:54 -- common/autotest_common.sh@142 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:23.883 07:59:54 -- common/autotest_common.sh@144 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:23.883 07:59:54 -- common/autotest_common.sh@146 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:23.883 07:59:54 -- common/autotest_common.sh@148 -- # : e810 00:07:23.883 07:59:54 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:23.883 07:59:54 -- common/autotest_common.sh@150 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:23.883 07:59:54 -- common/autotest_common.sh@152 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:07:23.883 07:59:54 -- common/autotest_common.sh@154 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:23.883 07:59:54 -- common/autotest_common.sh@156 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:23.883 07:59:54 -- common/autotest_common.sh@158 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:23.883 07:59:54 -- common/autotest_common.sh@160 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:23.883 07:59:54 -- common/autotest_common.sh@163 -- # : 00:07:23.883 07:59:54 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:23.883 07:59:54 -- common/autotest_common.sh@165 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:23.883 07:59:54 -- common/autotest_common.sh@167 -- # : 0 00:07:23.883 07:59:54 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:23.883 07:59:54 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.883 07:59:54 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.883 07:59:54 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.883 07:59:54 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.883 07:59:54 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.883 07:59:54 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:23.883 07:59:54 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:23.883 07:59:54 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.883 07:59:54 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.883 07:59:54 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.883 07:59:54 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.883 07:59:54 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:23.883 07:59:54 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:23.883 07:59:54 -- common/autotest_common.sh@196 -- # cat 00:07:23.883 07:59:54 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:23.883 07:59:54 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.883 07:59:54 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.883 07:59:54 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.883 07:59:54 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.883 07:59:54 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:23.883 07:59:54 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:23.883 07:59:54 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.883 07:59:54 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.883 07:59:54 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.883 07:59:54 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.883 07:59:54 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.883 07:59:54 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.884 07:59:54 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.884 07:59:54 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.884 07:59:54 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:23.884 07:59:54 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:23.884 07:59:54 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.884 07:59:54 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.884 07:59:54 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:23.884 07:59:54 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:23.884 07:59:54 -- common/autotest_common.sh@249 -- # valgrind= 00:07:23.884 07:59:54 -- common/autotest_common.sh@255 -- # uname -s 00:07:23.884 07:59:54 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:23.884 07:59:54 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:23.884 07:59:54 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:23.884 07:59:54 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:23.884 07:59:54 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:23.884 07:59:54 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:23.884 07:59:54 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:23.884 07:59:54 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:23.884 07:59:54 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:23.884 07:59:54 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:23.884 07:59:54 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:23.884 07:59:54 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:23.884 07:59:54 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:23.884 07:59:54 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:23.884 07:59:54 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:23.884 07:59:54 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:23.884 07:59:54 -- common/autotest_common.sh@309 -- # [[ -z 864533 ]] 00:07:23.884 07:59:54 -- common/autotest_common.sh@309 -- # 
kill -0 864533 00:07:24.145 07:59:54 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:24.145 07:59:54 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:24.145 07:59:54 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:24.145 07:59:54 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:24.145 07:59:54 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:24.145 07:59:54 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:24.145 07:59:54 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:24.145 07:59:54 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:24.145 07:59:54 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.HN2Gm2 00:07:24.145 07:59:54 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:24.145 07:59:54 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:24.145 07:59:54 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:24.145 07:59:54 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HN2Gm2/tests/target /tmp/spdk.HN2Gm2 00:07:24.145 07:59:54 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:24.145 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.145 07:59:54 -- common/autotest_common.sh@318 -- # df -T 00:07:24.145 07:59:54 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:24.145 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:24.145 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:24.145 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:24.145 07:59:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:24.145 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:24.145 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.145 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:24.145 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:24.145 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=957403136 00:07:24.145 07:59:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:24.145 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=4327026688 00:07:24.146 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=123951812608 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129370963968 00:07:24.146 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=5419151360 00:07:24.146 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=64684224512 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=64685481984 00:07:24.146 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:07:24.146 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=25864445952 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25874194432 00:07:24.146 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=9748480 00:07:24.146 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=179200 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:24.146 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=324608 00:07:24.146 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=64685031424 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685481984 00:07:24.146 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=450560 00:07:24.146 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937089024 00:07:24.146 07:59:54 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937093120 00:07:24.146 07:59:54 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:24.146 07:59:54 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:24.146 07:59:54 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:24.146 * Looking for test storage... 
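The storage probe that starts here (set_test_storage) walks the candidate directories, reads df output into the mounts/fss/sizes/avails arrays shown above, and settles on the first filesystem that can hold the requested 2214592512 bytes, exporting it as SPDK_TEST_STORAGE; in this run that is the overlay root with roughly 124 GB free. A rough standalone equivalent of the probe, with a hypothetical candidate list, is:

  # Hypothetical sketch of the free-space check set_test_storage performs.
  requested_size=$((2147483648 + 67108864))    # 2 GiB + 64 MiB, as requested in this run
  for dir in "$PWD/test/nvmf/target" "/tmp/spdk_storage"; do
      mkdir -p "$dir"
      avail=$(df --output=avail -B1 "$dir" | tail -n1)
      if (( avail >= requested_size )); then
          echo "* Found test storage at $dir"
          export SPDK_TEST_STORAGE=$dir
          break
      fi
  done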
00:07:24.146 07:59:54 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:24.146 07:59:54 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:24.146 07:59:54 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.146 07:59:54 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:24.146 07:59:54 -- common/autotest_common.sh@363 -- # mount=/ 00:07:24.146 07:59:54 -- common/autotest_common.sh@365 -- # target_space=123951812608 00:07:24.146 07:59:54 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:24.146 07:59:54 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:24.146 07:59:54 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:24.146 07:59:54 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:24.146 07:59:54 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:24.146 07:59:54 -- common/autotest_common.sh@372 -- # new_size=7633743872 00:07:24.146 07:59:54 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:24.146 07:59:54 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.146 07:59:54 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.146 07:59:54 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.146 07:59:54 -- common/autotest_common.sh@380 -- # return 0 00:07:24.146 07:59:54 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:24.146 07:59:54 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:24.146 07:59:54 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:24.146 07:59:54 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:24.146 07:59:54 -- common/autotest_common.sh@1672 -- # true 00:07:24.146 07:59:54 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:24.146 07:59:54 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:24.146 07:59:54 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:24.146 07:59:54 -- common/autotest_common.sh@27 -- # exec 00:07:24.146 07:59:54 -- common/autotest_common.sh@29 -- # exec 00:07:24.146 07:59:54 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:24.146 07:59:54 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:24.146 07:59:54 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:24.146 07:59:54 -- common/autotest_common.sh@18 -- # set -x 00:07:24.146 07:59:54 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.146 07:59:54 -- nvmf/common.sh@7 -- # uname -s 00:07:24.146 07:59:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.146 07:59:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.146 07:59:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.146 07:59:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.146 07:59:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.146 07:59:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.146 07:59:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.146 07:59:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.146 07:59:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.146 07:59:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.146 07:59:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.146 07:59:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:24.146 07:59:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.146 07:59:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.146 07:59:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.146 07:59:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.146 07:59:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.146 07:59:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.146 07:59:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.146 07:59:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.146 07:59:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.146 07:59:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.146 07:59:54 -- paths/export.sh@5 -- # export PATH 00:07:24.146 07:59:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.146 07:59:54 -- nvmf/common.sh@46 -- # : 0 00:07:24.146 07:59:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:24.146 07:59:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:24.146 07:59:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:24.146 07:59:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.146 07:59:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.146 07:59:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:24.146 07:59:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:24.146 07:59:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:24.146 07:59:54 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:24.146 07:59:54 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:24.146 07:59:54 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:24.146 07:59:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:24.146 07:59:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.146 07:59:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:24.146 07:59:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:24.146 07:59:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:24.146 07:59:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.146 07:59:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.146 07:59:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.146 07:59:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:24.146 07:59:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:24.147 07:59:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:24.147 07:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:30.730 08:00:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:30.730 08:00:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:30.730 08:00:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:30.730 08:00:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:30.730 08:00:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:30.730 08:00:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:30.730 08:00:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:30.730 08:00:00 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:30.730 08:00:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:30.730 08:00:00 -- nvmf/common.sh@295 -- # e810=() 00:07:30.730 08:00:00 -- nvmf/common.sh@295 -- # local -ga e810 00:07:30.730 08:00:00 -- nvmf/common.sh@296 -- # x722=() 00:07:30.730 08:00:00 -- nvmf/common.sh@296 -- # local -ga x722 00:07:30.730 08:00:00 -- nvmf/common.sh@297 -- # mlx=() 00:07:30.730 08:00:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:30.730 08:00:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.730 08:00:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:30.730 08:00:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:30.730 08:00:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:30.730 08:00:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:30.730 08:00:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:30.730 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:30.730 08:00:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:30.730 08:00:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:30.730 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:30.730 08:00:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:30.730 08:00:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:30.730 08:00:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.730 08:00:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:30.730 08:00:00 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.730 08:00:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:30.730 Found net devices under 0000:31:00.0: cvl_0_0 00:07:30.730 08:00:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.730 08:00:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:30.730 08:00:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.730 08:00:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:30.730 08:00:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.730 08:00:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:30.730 Found net devices under 0000:31:00.1: cvl_0_1 00:07:30.730 08:00:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.730 08:00:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:30.730 08:00:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:30.730 08:00:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:30.730 08:00:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:30.730 08:00:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.730 08:00:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.730 08:00:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.730 08:00:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:30.730 08:00:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.730 08:00:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.730 08:00:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:30.730 08:00:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.730 08:00:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.730 08:00:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:30.730 08:00:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:30.730 08:00:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.730 08:00:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.730 08:00:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.731 08:00:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.731 08:00:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:30.731 08:00:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.731 08:00:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.731 08:00:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.731 08:00:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:30.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:07:30.731 00:07:30.731 --- 10.0.0.2 ping statistics --- 00:07:30.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.731 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:07:30.731 08:00:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:07:30.731 00:07:30.731 --- 10.0.0.1 ping statistics --- 00:07:30.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.731 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:07:30.731 08:00:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.731 08:00:01 -- nvmf/common.sh@410 -- # return 0 00:07:30.731 08:00:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:30.731 08:00:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.731 08:00:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:30.731 08:00:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:30.731 08:00:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.731 08:00:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:30.731 08:00:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:30.731 08:00:01 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:30.731 08:00:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:30.731 08:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.731 08:00:01 -- common/autotest_common.sh@10 -- # set +x 00:07:30.731 ************************************ 00:07:30.731 START TEST nvmf_filesystem_no_in_capsule 00:07:30.731 ************************************ 00:07:30.731 08:00:01 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:30.731 08:00:01 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:30.731 08:00:01 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:30.731 08:00:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:30.731 08:00:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:30.731 08:00:01 -- common/autotest_common.sh@10 -- # set +x 00:07:30.731 08:00:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:30.731 08:00:01 -- nvmf/common.sh@469 -- # nvmfpid=868276 00:07:30.731 08:00:01 -- nvmf/common.sh@470 -- # waitforlisten 868276 00:07:30.731 08:00:01 -- common/autotest_common.sh@819 -- # '[' -z 868276 ']' 00:07:30.731 08:00:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.731 08:00:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:30.731 08:00:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.731 08:00:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:30.731 08:00:01 -- common/autotest_common.sh@10 -- # set +x 00:07:30.731 [2024-06-11 08:00:01.292660] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
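The target start-up logged just above (nvmfappstart followed by waitforlisten on /var/tmp/spdk.sock) reduces to roughly the following; paths are relative to the spdk checkout, and the polling loop is only a crude stand-in for the real waitforlisten helper:

  modprobe nvme-tcp                                   # kernel initiator for the host side
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait until the target answers on its default RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done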
00:07:30.731 [2024-06-11 08:00:01.292708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.731 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.731 [2024-06-11 08:00:01.352889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.991 [2024-06-11 08:00:01.420380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:30.991 [2024-06-11 08:00:01.420503] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.991 [2024-06-11 08:00:01.420513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.991 [2024-06-11 08:00:01.420520] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.991 [2024-06-11 08:00:01.420589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.991 [2024-06-11 08:00:01.420703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.991 [2024-06-11 08:00:01.420858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.991 [2024-06-11 08:00:01.420859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.559 08:00:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:31.560 08:00:02 -- common/autotest_common.sh@852 -- # return 0 00:07:31.560 08:00:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:31.560 08:00:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:31.560 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.560 08:00:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.560 08:00:02 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:31.560 08:00:02 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:31.560 08:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.560 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.560 [2024-06-11 08:00:02.130712] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.560 08:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.560 08:00:02 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:31.560 08:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.560 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.818 Malloc1 00:07:31.818 08:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.818 08:00:02 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:31.818 08:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.818 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.818 08:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.818 08:00:02 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:31.818 08:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.818 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.818 08:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.818 08:00:02 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:07:31.818 08:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.818 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.818 [2024-06-11 08:00:02.261642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.818 08:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.818 08:00:02 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:31.818 08:00:02 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:31.818 08:00:02 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:31.818 08:00:02 -- common/autotest_common.sh@1359 -- # local bs 00:07:31.818 08:00:02 -- common/autotest_common.sh@1360 -- # local nb 00:07:31.818 08:00:02 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:31.818 08:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.819 08:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.819 08:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.819 08:00:02 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:31.819 { 00:07:31.819 "name": "Malloc1", 00:07:31.819 "aliases": [ 00:07:31.819 "e112c534-b7c7-4bf7-9bf9-6101b1945022" 00:07:31.819 ], 00:07:31.819 "product_name": "Malloc disk", 00:07:31.819 "block_size": 512, 00:07:31.819 "num_blocks": 1048576, 00:07:31.819 "uuid": "e112c534-b7c7-4bf7-9bf9-6101b1945022", 00:07:31.819 "assigned_rate_limits": { 00:07:31.819 "rw_ios_per_sec": 0, 00:07:31.819 "rw_mbytes_per_sec": 0, 00:07:31.819 "r_mbytes_per_sec": 0, 00:07:31.819 "w_mbytes_per_sec": 0 00:07:31.819 }, 00:07:31.819 "claimed": true, 00:07:31.819 "claim_type": "exclusive_write", 00:07:31.819 "zoned": false, 00:07:31.819 "supported_io_types": { 00:07:31.819 "read": true, 00:07:31.819 "write": true, 00:07:31.819 "unmap": true, 00:07:31.819 "write_zeroes": true, 00:07:31.819 "flush": true, 00:07:31.819 "reset": true, 00:07:31.819 "compare": false, 00:07:31.819 "compare_and_write": false, 00:07:31.819 "abort": true, 00:07:31.819 "nvme_admin": false, 00:07:31.819 "nvme_io": false 00:07:31.819 }, 00:07:31.819 "memory_domains": [ 00:07:31.819 { 00:07:31.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.819 "dma_device_type": 2 00:07:31.819 } 00:07:31.819 ], 00:07:31.819 "driver_specific": {} 00:07:31.819 } 00:07:31.819 ]' 00:07:31.819 08:00:02 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:31.819 08:00:02 -- common/autotest_common.sh@1362 -- # bs=512 00:07:31.819 08:00:02 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:31.819 08:00:02 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:31.819 08:00:02 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:31.819 08:00:02 -- common/autotest_common.sh@1367 -- # echo 512 00:07:31.819 08:00:02 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:31.819 08:00:02 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:33.224 08:00:03 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:33.224 08:00:03 -- common/autotest_common.sh@1177 -- # local i=0 00:07:33.224 08:00:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:33.224 08:00:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:33.224 08:00:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:35.763 08:00:05 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:35.763 08:00:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:35.763 08:00:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:35.763 08:00:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:35.763 08:00:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:35.763 08:00:05 -- common/autotest_common.sh@1187 -- # return 0 00:07:35.763 08:00:05 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:35.763 08:00:05 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:35.763 08:00:05 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:35.763 08:00:05 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:35.763 08:00:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:35.763 08:00:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:35.763 08:00:05 -- setup/common.sh@80 -- # echo 536870912 00:07:35.763 08:00:05 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:35.763 08:00:05 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:35.763 08:00:05 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:35.763 08:00:05 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:35.763 08:00:06 -- target/filesystem.sh@69 -- # partprobe 00:07:35.763 08:00:06 -- target/filesystem.sh@70 -- # sleep 1 00:07:36.701 08:00:07 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:36.701 08:00:07 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:36.701 08:00:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:36.701 08:00:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.701 08:00:07 -- common/autotest_common.sh@10 -- # set +x 00:07:36.701 ************************************ 00:07:36.701 START TEST filesystem_ext4 00:07:36.701 ************************************ 00:07:36.701 08:00:07 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:36.701 08:00:07 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:36.701 08:00:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.701 08:00:07 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:36.701 08:00:07 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:36.701 08:00:07 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:36.701 08:00:07 -- common/autotest_common.sh@904 -- # local i=0 00:07:36.701 08:00:07 -- common/autotest_common.sh@905 -- # local force 00:07:36.701 08:00:07 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:36.701 08:00:07 -- common/autotest_common.sh@908 -- # force=-F 00:07:36.701 08:00:07 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:36.701 mke2fs 1.46.5 (30-Dec-2021) 00:07:36.701 Discarding device blocks: 0/522240 done 00:07:36.701 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:36.701 Filesystem UUID: c9da8943-6843-40fe-a298-72f49489bb2e 00:07:36.701 Superblock backups stored on blocks: 00:07:36.701 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:36.701 00:07:36.701 Allocating group tables: 0/64 done 00:07:36.701 Writing inode tables: 0/64 done 00:07:36.961 Creating journal (8192 blocks): done 00:07:36.961 Writing superblocks and filesystem accounting information: 0/64 done 00:07:36.961 00:07:36.961 08:00:07 -- 
common/autotest_common.sh@921 -- # return 0 00:07:36.961 08:00:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.222 08:00:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.222 08:00:07 -- target/filesystem.sh@25 -- # sync 00:07:37.222 08:00:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.222 08:00:07 -- target/filesystem.sh@27 -- # sync 00:07:37.222 08:00:07 -- target/filesystem.sh@29 -- # i=0 00:07:37.222 08:00:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.222 08:00:07 -- target/filesystem.sh@37 -- # kill -0 868276 00:07:37.222 08:00:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.222 08:00:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.222 08:00:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.222 08:00:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.222 00:07:37.222 real 0m0.596s 00:07:37.222 user 0m0.020s 00:07:37.222 sys 0m0.050s 00:07:37.222 08:00:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.222 08:00:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.222 ************************************ 00:07:37.222 END TEST filesystem_ext4 00:07:37.222 ************************************ 00:07:37.222 08:00:07 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.222 08:00:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:37.222 08:00:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.222 08:00:07 -- common/autotest_common.sh@10 -- # set +x 00:07:37.222 ************************************ 00:07:37.222 START TEST filesystem_btrfs 00:07:37.222 ************************************ 00:07:37.222 08:00:07 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.222 08:00:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.222 08:00:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.222 08:00:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.222 08:00:07 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:37.222 08:00:07 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:37.222 08:00:07 -- common/autotest_common.sh@904 -- # local i=0 00:07:37.222 08:00:07 -- common/autotest_common.sh@905 -- # local force 00:07:37.222 08:00:07 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:37.222 08:00:07 -- common/autotest_common.sh@910 -- # force=-f 00:07:37.222 08:00:07 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:37.482 btrfs-progs v6.6.2 00:07:37.482 See https://btrfs.readthedocs.io for more information. 00:07:37.482 00:07:37.482 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
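For orientation, the initiator-side flow the ext4 sub-test above just exercised, and which each filesystem sub-test repeats, looks roughly like this; hostnqn/hostid are the values produced by nvme gen-hostnqn earlier in the log, and the snippet is a condensed sketch of target/filesystem.sh rather than a verbatim copy (the mkfs.btrfs output of the next sub-test continues below):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  # wait for the namespace to appear, identified by the subsystem serial number
  while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 1 ]; do sleep 2; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 here

  # one GPT partition over the whole 512 MiB namespace, then format and smoke-test it
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe; sleep 1
  mkfs.ext4 -F /dev/${nvme_name}p1          # the btrfs/xfs variants use -f instead
  mkdir -p /mnt/device
  mount /dev/${nvme_name}p1 /mnt/device
  touch /mnt/device/aaa; sync
  rm /mnt/device/aaa; sync
  umount /mnt/device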
00:07:37.482 NOTE: several default settings have changed in version 5.15, please make sure 00:07:37.482 this does not affect your deployments: 00:07:37.482 - DUP for metadata (-m dup) 00:07:37.482 - enabled no-holes (-O no-holes) 00:07:37.482 - enabled free-space-tree (-R free-space-tree) 00:07:37.482 00:07:37.482 Label: (null) 00:07:37.482 UUID: fac2aa05-e09d-471b-81e0-9703b09cd403 00:07:37.482 Node size: 16384 00:07:37.482 Sector size: 4096 00:07:37.482 Filesystem size: 510.00MiB 00:07:37.482 Block group profiles: 00:07:37.482 Data: single 8.00MiB 00:07:37.482 Metadata: DUP 32.00MiB 00:07:37.482 System: DUP 8.00MiB 00:07:37.482 SSD detected: yes 00:07:37.482 Zoned device: no 00:07:37.482 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:37.482 Runtime features: free-space-tree 00:07:37.482 Checksum: crc32c 00:07:37.482 Number of devices: 1 00:07:37.482 Devices: 00:07:37.482 ID SIZE PATH 00:07:37.482 1 510.00MiB /dev/nvme0n1p1 00:07:37.482 00:07:37.482 08:00:08 -- common/autotest_common.sh@921 -- # return 0 00:07:37.482 08:00:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.422 08:00:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.422 08:00:08 -- target/filesystem.sh@25 -- # sync 00:07:38.422 08:00:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.422 08:00:08 -- target/filesystem.sh@27 -- # sync 00:07:38.422 08:00:08 -- target/filesystem.sh@29 -- # i=0 00:07:38.422 08:00:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.422 08:00:08 -- target/filesystem.sh@37 -- # kill -0 868276 00:07:38.422 08:00:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.422 08:00:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.422 08:00:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.422 08:00:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.422 00:07:38.422 real 0m0.948s 00:07:38.422 user 0m0.020s 00:07:38.422 sys 0m0.068s 00:07:38.422 08:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.422 08:00:08 -- common/autotest_common.sh@10 -- # set +x 00:07:38.422 ************************************ 00:07:38.422 END TEST filesystem_btrfs 00:07:38.422 ************************************ 00:07:38.422 08:00:08 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:38.422 08:00:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:38.422 08:00:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.422 08:00:08 -- common/autotest_common.sh@10 -- # set +x 00:07:38.422 ************************************ 00:07:38.422 START TEST filesystem_xfs 00:07:38.422 ************************************ 00:07:38.422 08:00:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:38.422 08:00:08 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:38.422 08:00:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.422 08:00:08 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:38.422 08:00:08 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:38.422 08:00:08 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:38.422 08:00:08 -- common/autotest_common.sh@904 -- # local i=0 00:07:38.422 08:00:08 -- common/autotest_common.sh@905 -- # local force 00:07:38.422 08:00:08 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:38.422 08:00:08 -- common/autotest_common.sh@910 -- # force=-f 00:07:38.422 08:00:08 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:38.422 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:38.422 = sectsz=512 attr=2, projid32bit=1 00:07:38.422 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:38.422 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:38.422 data = bsize=4096 blocks=130560, imaxpct=25 00:07:38.422 = sunit=0 swidth=0 blks 00:07:38.422 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:38.422 log =internal log bsize=4096 blocks=16384, version=2 00:07:38.422 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:38.422 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.363 Discarding blocks...Done. 00:07:39.363 08:00:09 -- common/autotest_common.sh@921 -- # return 0 00:07:39.363 08:00:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.904 08:00:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.904 08:00:12 -- target/filesystem.sh@25 -- # sync 00:07:41.904 08:00:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.904 08:00:12 -- target/filesystem.sh@27 -- # sync 00:07:41.904 08:00:12 -- target/filesystem.sh@29 -- # i=0 00:07:41.904 08:00:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.904 08:00:12 -- target/filesystem.sh@37 -- # kill -0 868276 00:07:41.904 08:00:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.904 08:00:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.904 08:00:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.904 08:00:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.904 00:07:41.904 real 0m3.607s 00:07:41.904 user 0m0.029s 00:07:41.904 sys 0m0.049s 00:07:41.904 08:00:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.904 08:00:12 -- common/autotest_common.sh@10 -- # set +x 00:07:41.904 ************************************ 00:07:41.904 END TEST filesystem_xfs 00:07:41.904 ************************************ 00:07:41.904 08:00:12 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.164 08:00:12 -- target/filesystem.sh@93 -- # sync 00:07:42.164 08:00:12 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.164 08:00:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.164 08:00:12 -- common/autotest_common.sh@1198 -- # local i=0 00:07:42.164 08:00:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:42.164 08:00:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.425 08:00:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:42.425 08:00:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.425 08:00:12 -- common/autotest_common.sh@1210 -- # return 0 00:07:42.425 08:00:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.425 08:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.425 08:00:12 -- common/autotest_common.sh@10 -- # set +x 00:07:42.425 08:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.425 08:00:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.425 08:00:12 -- target/filesystem.sh@101 -- # killprocess 868276 00:07:42.425 08:00:12 -- common/autotest_common.sh@926 -- # '[' -z 868276 ']' 00:07:42.425 08:00:12 -- common/autotest_common.sh@930 -- # kill -0 868276 00:07:42.425 08:00:12 -- 
common/autotest_common.sh@931 -- # uname 00:07:42.425 08:00:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:42.425 08:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 868276 00:07:42.425 08:00:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:42.425 08:00:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:42.425 08:00:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 868276' 00:07:42.425 killing process with pid 868276 00:07:42.425 08:00:12 -- common/autotest_common.sh@945 -- # kill 868276 00:07:42.425 08:00:12 -- common/autotest_common.sh@950 -- # wait 868276 00:07:42.685 08:00:13 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.685 00:07:42.685 real 0m11.875s 00:07:42.685 user 0m46.842s 00:07:42.685 sys 0m0.944s 00:07:42.685 08:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.685 08:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 ************************************ 00:07:42.685 END TEST nvmf_filesystem_no_in_capsule 00:07:42.685 ************************************ 00:07:42.685 08:00:13 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:42.685 08:00:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:42.685 08:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.685 08:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 ************************************ 00:07:42.685 START TEST nvmf_filesystem_in_capsule 00:07:42.685 ************************************ 00:07:42.685 08:00:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:42.685 08:00:13 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:42.685 08:00:13 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.685 08:00:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:42.685 08:00:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:42.685 08:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 08:00:13 -- nvmf/common.sh@469 -- # nvmfpid=871386 00:07:42.685 08:00:13 -- nvmf/common.sh@470 -- # waitforlisten 871386 00:07:42.685 08:00:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.685 08:00:13 -- common/autotest_common.sh@819 -- # '[' -z 871386 ']' 00:07:42.685 08:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.685 08:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.685 08:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.685 08:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.685 08:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 [2024-06-11 08:00:13.236653] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
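The block above starts the second half of the suite. The only functional difference from the leg that just finished is the in-capsule data size handed to the transport, i.e. (sketch of the two run_test invocations):

  run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0     # first leg
  run_test nvmf_filesystem_in_capsule    nvmf_filesystem_part 4096  # this leg
  # inside nvmf_filesystem_part, that argument becomes the -c (in-capsule data size)
  # option of nvmf_create_transport, as the rpc_cmd logged just below shows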
00:07:42.685 [2024-06-11 08:00:13.236708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.685 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.685 [2024-06-11 08:00:13.302866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.945 [2024-06-11 08:00:13.370893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.945 [2024-06-11 08:00:13.371026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.945 [2024-06-11 08:00:13.371037] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.945 [2024-06-11 08:00:13.371045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.945 [2024-06-11 08:00:13.371182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.945 [2024-06-11 08:00:13.371312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.945 [2024-06-11 08:00:13.371482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.945 [2024-06-11 08:00:13.371492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.516 08:00:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.516 08:00:14 -- common/autotest_common.sh@852 -- # return 0 00:07:43.516 08:00:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:43.516 08:00:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:43.516 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.516 08:00:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.516 08:00:14 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.516 08:00:14 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:43.516 08:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.516 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.516 [2024-06-11 08:00:14.050652] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.516 08:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.516 08:00:14 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.516 08:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.516 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.516 Malloc1 00:07:43.516 08:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.516 08:00:14 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.516 08:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.516 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.516 08:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.516 08:00:14 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.516 08:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.516 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.777 08:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.777 08:00:14 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
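The rpc_cmd calls above drive the target's JSON-RPC interface; issued directly against /var/tmp/spdk.sock the same provisioning sequence would look roughly like this (values taken from this run):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1            # 512 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # get_bdev_size then reads the geometry back and checks it against the nvme namespace size
  bs=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')     # 512
  nb=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')     # 1048576
  echo $(( bs * nb ))                                                         # 536870912 bytes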
00:07:43.777 08:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.777 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.777 [2024-06-11 08:00:14.175400] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.777 08:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.777 08:00:14 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.777 08:00:14 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:43.777 08:00:14 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:43.777 08:00:14 -- common/autotest_common.sh@1359 -- # local bs 00:07:43.777 08:00:14 -- common/autotest_common.sh@1360 -- # local nb 00:07:43.777 08:00:14 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.777 08:00:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.777 08:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.777 08:00:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.777 08:00:14 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:43.777 { 00:07:43.777 "name": "Malloc1", 00:07:43.777 "aliases": [ 00:07:43.777 "625c0d9b-2694-482f-9e28-df6a402a1419" 00:07:43.777 ], 00:07:43.777 "product_name": "Malloc disk", 00:07:43.777 "block_size": 512, 00:07:43.777 "num_blocks": 1048576, 00:07:43.777 "uuid": "625c0d9b-2694-482f-9e28-df6a402a1419", 00:07:43.777 "assigned_rate_limits": { 00:07:43.777 "rw_ios_per_sec": 0, 00:07:43.777 "rw_mbytes_per_sec": 0, 00:07:43.777 "r_mbytes_per_sec": 0, 00:07:43.777 "w_mbytes_per_sec": 0 00:07:43.777 }, 00:07:43.777 "claimed": true, 00:07:43.777 "claim_type": "exclusive_write", 00:07:43.777 "zoned": false, 00:07:43.777 "supported_io_types": { 00:07:43.777 "read": true, 00:07:43.777 "write": true, 00:07:43.777 "unmap": true, 00:07:43.777 "write_zeroes": true, 00:07:43.777 "flush": true, 00:07:43.777 "reset": true, 00:07:43.777 "compare": false, 00:07:43.777 "compare_and_write": false, 00:07:43.777 "abort": true, 00:07:43.777 "nvme_admin": false, 00:07:43.777 "nvme_io": false 00:07:43.777 }, 00:07:43.777 "memory_domains": [ 00:07:43.777 { 00:07:43.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.777 "dma_device_type": 2 00:07:43.777 } 00:07:43.777 ], 00:07:43.777 "driver_specific": {} 00:07:43.777 } 00:07:43.777 ]' 00:07:43.777 08:00:14 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:43.777 08:00:14 -- common/autotest_common.sh@1362 -- # bs=512 00:07:43.777 08:00:14 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:43.777 08:00:14 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:43.777 08:00:14 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:43.777 08:00:14 -- common/autotest_common.sh@1367 -- # echo 512 00:07:43.777 08:00:14 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.777 08:00:14 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:45.175 08:00:15 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.175 08:00:15 -- common/autotest_common.sh@1177 -- # local i=0 00:07:45.175 08:00:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.175 08:00:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:45.175 08:00:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:47.717 08:00:17 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:47.717 08:00:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:47.717 08:00:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.717 08:00:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:47.717 08:00:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.717 08:00:17 -- common/autotest_common.sh@1187 -- # return 0 00:07:47.717 08:00:17 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.717 08:00:17 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:47.717 08:00:17 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.717 08:00:17 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.717 08:00:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.717 08:00:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.717 08:00:17 -- setup/common.sh@80 -- # echo 536870912 00:07:47.717 08:00:17 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.717 08:00:17 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.717 08:00:17 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.717 08:00:17 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.717 08:00:18 -- target/filesystem.sh@69 -- # partprobe 00:07:47.717 08:00:18 -- target/filesystem.sh@70 -- # sleep 1 00:07:48.670 08:00:19 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:48.670 08:00:19 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:48.670 08:00:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:48.670 08:00:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.670 08:00:19 -- common/autotest_common.sh@10 -- # set +x 00:07:48.670 ************************************ 00:07:48.670 START TEST filesystem_in_capsule_ext4 00:07:48.670 ************************************ 00:07:48.670 08:00:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:48.670 08:00:19 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:48.670 08:00:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.670 08:00:19 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:48.670 08:00:19 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:48.670 08:00:19 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:48.670 08:00:19 -- common/autotest_common.sh@904 -- # local i=0 00:07:48.670 08:00:19 -- common/autotest_common.sh@905 -- # local force 00:07:48.670 08:00:19 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:48.670 08:00:19 -- common/autotest_common.sh@908 -- # force=-F 00:07:48.670 08:00:19 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:48.670 mke2fs 1.46.5 (30-Dec-2021) 00:07:48.670 Discarding device blocks: 0/522240 done 00:07:48.670 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:48.670 Filesystem UUID: 9d6c8459-9b51-4d27-9006-b77033ee343e 00:07:48.670 Superblock backups stored on blocks: 00:07:48.670 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:48.670 00:07:48.670 Allocating group tables: 0/64 done 00:07:48.670 Writing inode tables: 0/64 done 00:07:48.670 Creating journal (8192 blocks): done 00:07:49.762 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:07:49.762 00:07:49.762 
08:00:20 -- common/autotest_common.sh@921 -- # return 0 00:07:49.762 08:00:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.022 08:00:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.022 08:00:20 -- target/filesystem.sh@25 -- # sync 00:07:50.022 08:00:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.022 08:00:20 -- target/filesystem.sh@27 -- # sync 00:07:50.022 08:00:20 -- target/filesystem.sh@29 -- # i=0 00:07:50.022 08:00:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.022 08:00:20 -- target/filesystem.sh@37 -- # kill -0 871386 00:07:50.022 08:00:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.022 08:00:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.022 08:00:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.022 08:00:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.022 00:07:50.022 real 0m1.519s 00:07:50.022 user 0m0.025s 00:07:50.022 sys 0m0.051s 00:07:50.022 08:00:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.022 08:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.022 ************************************ 00:07:50.022 END TEST filesystem_in_capsule_ext4 00:07:50.022 ************************************ 00:07:50.022 08:00:20 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:50.022 08:00:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:50.022 08:00:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.022 08:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.022 ************************************ 00:07:50.022 START TEST filesystem_in_capsule_btrfs 00:07:50.022 ************************************ 00:07:50.022 08:00:20 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:50.022 08:00:20 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:50.022 08:00:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:50.022 08:00:20 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:50.022 08:00:20 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:50.022 08:00:20 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:50.022 08:00:20 -- common/autotest_common.sh@904 -- # local i=0 00:07:50.022 08:00:20 -- common/autotest_common.sh@905 -- # local force 00:07:50.022 08:00:20 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:50.022 08:00:20 -- common/autotest_common.sh@910 -- # force=-f 00:07:50.022 08:00:20 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:50.593 btrfs-progs v6.6.2 00:07:50.593 See https://btrfs.readthedocs.io for more information. 00:07:50.593 00:07:50.593 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:50.593 NOTE: several default settings have changed in version 5.15, please make sure 00:07:50.593 this does not affect your deployments: 00:07:50.593 - DUP for metadata (-m dup) 00:07:50.593 - enabled no-holes (-O no-holes) 00:07:50.593 - enabled free-space-tree (-R free-space-tree) 00:07:50.593 00:07:50.593 Label: (null) 00:07:50.593 UUID: dc9ac3bc-d886-47f5-81b3-6cb8973767ca 00:07:50.593 Node size: 16384 00:07:50.593 Sector size: 4096 00:07:50.593 Filesystem size: 510.00MiB 00:07:50.593 Block group profiles: 00:07:50.593 Data: single 8.00MiB 00:07:50.593 Metadata: DUP 32.00MiB 00:07:50.593 System: DUP 8.00MiB 00:07:50.593 SSD detected: yes 00:07:50.593 Zoned device: no 00:07:50.593 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:50.593 Runtime features: free-space-tree 00:07:50.593 Checksum: crc32c 00:07:50.593 Number of devices: 1 00:07:50.593 Devices: 00:07:50.593 ID SIZE PATH 00:07:50.593 1 510.00MiB /dev/nvme0n1p1 00:07:50.593 00:07:50.593 08:00:21 -- common/autotest_common.sh@921 -- # return 0 00:07:50.593 08:00:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.163 08:00:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.163 08:00:21 -- target/filesystem.sh@25 -- # sync 00:07:51.163 08:00:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.163 08:00:21 -- target/filesystem.sh@27 -- # sync 00:07:51.163 08:00:21 -- target/filesystem.sh@29 -- # i=0 00:07:51.163 08:00:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.163 08:00:21 -- target/filesystem.sh@37 -- # kill -0 871386 00:07:51.163 08:00:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.163 08:00:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.163 08:00:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.163 08:00:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.163 00:07:51.163 real 0m0.924s 00:07:51.163 user 0m0.021s 00:07:51.163 sys 0m0.067s 00:07:51.163 08:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.163 08:00:21 -- common/autotest_common.sh@10 -- # set +x 00:07:51.163 ************************************ 00:07:51.163 END TEST filesystem_in_capsule_btrfs 00:07:51.163 ************************************ 00:07:51.163 08:00:21 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:51.163 08:00:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:51.163 08:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.163 08:00:21 -- common/autotest_common.sh@10 -- # set +x 00:07:51.163 ************************************ 00:07:51.163 START TEST filesystem_in_capsule_xfs 00:07:51.163 ************************************ 00:07:51.163 08:00:21 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:51.163 08:00:21 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:51.163 08:00:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.163 08:00:21 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:51.163 08:00:21 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:51.163 08:00:21 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:51.163 08:00:21 -- common/autotest_common.sh@904 -- # local i=0 00:07:51.163 08:00:21 -- common/autotest_common.sh@905 -- # local force 00:07:51.163 08:00:21 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:51.163 08:00:21 -- common/autotest_common.sh@910 -- # force=-f 
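The make_filesystem helper whose xtrace appears above (@902 through @913) mostly just picks the right force flag per filesystem before calling mkfs; a stripped-down sketch, omitting the retry bookkeeping behind the local i=0:

  make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
      force=-F            # mke2fs spells force differently
    else
      force=-f            # btrfs and xfs
    fi
    mkfs.$fstype $force "$dev_name"
  }

  make_filesystem xfs /dev/nvme0n1p1    # what the xfs sub-test below effectively runs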
00:07:51.163 08:00:21 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:51.163 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:51.163 = sectsz=512 attr=2, projid32bit=1 00:07:51.163 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:51.163 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:51.163 data = bsize=4096 blocks=130560, imaxpct=25 00:07:51.163 = sunit=0 swidth=0 blks 00:07:51.163 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:51.163 log =internal log bsize=4096 blocks=16384, version=2 00:07:51.163 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:51.163 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:52.104 Discarding blocks...Done. 00:07:52.104 08:00:22 -- common/autotest_common.sh@921 -- # return 0 00:07:52.104 08:00:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.018 08:00:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.279 08:00:24 -- target/filesystem.sh@25 -- # sync 00:07:54.279 08:00:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.279 08:00:24 -- target/filesystem.sh@27 -- # sync 00:07:54.279 08:00:24 -- target/filesystem.sh@29 -- # i=0 00:07:54.279 08:00:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.279 08:00:24 -- target/filesystem.sh@37 -- # kill -0 871386 00:07:54.279 08:00:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.279 08:00:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.279 08:00:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.279 08:00:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.279 00:07:54.279 real 0m3.127s 00:07:54.279 user 0m0.026s 00:07:54.279 sys 0m0.052s 00:07:54.279 08:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.279 08:00:24 -- common/autotest_common.sh@10 -- # set +x 00:07:54.279 ************************************ 00:07:54.279 END TEST filesystem_in_capsule_xfs 00:07:54.279 ************************************ 00:07:54.279 08:00:24 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:54.539 08:00:25 -- target/filesystem.sh@93 -- # sync 00:07:54.800 08:00:25 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:55.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:55.061 08:00:25 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:55.061 08:00:25 -- common/autotest_common.sh@1198 -- # local i=0 00:07:55.061 08:00:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:55.061 08:00:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.061 08:00:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:55.061 08:00:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:55.061 08:00:25 -- common/autotest_common.sh@1210 -- # return 0 00:07:55.061 08:00:25 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.061 08:00:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.061 08:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:55.061 08:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.061 08:00:25 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:55.061 08:00:25 -- target/filesystem.sh@101 -- # killprocess 871386 00:07:55.061 08:00:25 -- common/autotest_common.sh@926 -- # '[' -z 871386 ']' 00:07:55.061 08:00:25 -- common/autotest_common.sh@930 -- # kill -0 871386 
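Once the filesystem checks pass, the per-leg teardown logged above is essentially the following sketch rather than the exact script:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # waitforserial_disconnect: block until the serial disappears from lsblk
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill $nvmfpid && wait $nvmfpid                        # killprocess 871386 in this run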
00:07:55.061 08:00:25 -- common/autotest_common.sh@931 -- # uname 00:07:55.061 08:00:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:55.061 08:00:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 871386 00:07:55.061 08:00:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:55.061 08:00:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:55.061 08:00:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 871386' 00:07:55.061 killing process with pid 871386 00:07:55.061 08:00:25 -- common/autotest_common.sh@945 -- # kill 871386 00:07:55.061 08:00:25 -- common/autotest_common.sh@950 -- # wait 871386 00:07:55.322 08:00:25 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:55.322 00:07:55.322 real 0m12.636s 00:07:55.322 user 0m49.790s 00:07:55.322 sys 0m0.961s 00:07:55.322 08:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.322 08:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:55.322 ************************************ 00:07:55.322 END TEST nvmf_filesystem_in_capsule 00:07:55.322 ************************************ 00:07:55.322 08:00:25 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:55.322 08:00:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:55.322 08:00:25 -- nvmf/common.sh@116 -- # sync 00:07:55.322 08:00:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:55.322 08:00:25 -- nvmf/common.sh@119 -- # set +e 00:07:55.322 08:00:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:55.322 08:00:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:55.322 rmmod nvme_tcp 00:07:55.322 rmmod nvme_fabrics 00:07:55.322 rmmod nvme_keyring 00:07:55.322 08:00:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:55.322 08:00:25 -- nvmf/common.sh@123 -- # set -e 00:07:55.322 08:00:25 -- nvmf/common.sh@124 -- # return 0 00:07:55.322 08:00:25 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:55.322 08:00:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:55.322 08:00:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:55.322 08:00:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:55.322 08:00:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.322 08:00:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:55.322 08:00:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.322 08:00:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.322 08:00:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.868 08:00:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:57.868 00:07:57.868 real 0m33.644s 00:07:57.868 user 1m38.592s 00:07:57.868 sys 0m6.979s 00:07:57.868 08:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.868 08:00:27 -- common/autotest_common.sh@10 -- # set +x 00:07:57.868 ************************************ 00:07:57.868 END TEST nvmf_filesystem 00:07:57.868 ************************************ 00:07:57.868 08:00:28 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:57.868 08:00:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:57.868 08:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.868 08:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:57.868 ************************************ 00:07:57.868 START TEST nvmf_discovery 00:07:57.868 ************************************ 00:07:57.868 08:00:28 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:57.868 * Looking for test storage... 00:07:57.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.868 08:00:28 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.868 08:00:28 -- nvmf/common.sh@7 -- # uname -s 00:07:57.868 08:00:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.868 08:00:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.868 08:00:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.868 08:00:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.868 08:00:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.868 08:00:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.868 08:00:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.868 08:00:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.868 08:00:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.868 08:00:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.868 08:00:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:57.868 08:00:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:57.868 08:00:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.868 08:00:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.868 08:00:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.868 08:00:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.868 08:00:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.868 08:00:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.868 08:00:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.868 08:00:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.868 08:00:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.868 08:00:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.868 08:00:28 -- paths/export.sh@5 -- # export PATH 00:07:57.868 08:00:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.868 08:00:28 -- nvmf/common.sh@46 -- # : 0 00:07:57.868 08:00:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:57.868 08:00:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:57.868 08:00:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:57.868 08:00:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.868 08:00:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.868 08:00:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:57.868 08:00:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:57.868 08:00:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:57.868 08:00:28 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:57.868 08:00:28 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:57.868 08:00:28 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:57.868 08:00:28 -- target/discovery.sh@15 -- # hash nvme 00:07:57.868 08:00:28 -- target/discovery.sh@20 -- # nvmftestinit 00:07:57.868 08:00:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:57.868 08:00:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.868 08:00:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:57.869 08:00:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:57.869 08:00:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:57.869 08:00:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.869 08:00:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.869 08:00:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.869 08:00:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:57.869 08:00:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:57.869 08:00:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:57.869 08:00:28 -- common/autotest_common.sh@10 -- # set +x 00:08:04.455 08:00:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:04.455 08:00:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:04.455 08:00:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:04.455 08:00:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:04.455 08:00:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:04.455 08:00:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:04.455 08:00:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:04.455 08:00:34 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:04.455 08:00:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:04.455 08:00:34 -- nvmf/common.sh@295 -- # e810=() 00:08:04.455 08:00:34 -- nvmf/common.sh@295 -- # local -ga e810 00:08:04.455 08:00:34 -- nvmf/common.sh@296 -- # x722=() 00:08:04.455 08:00:34 -- nvmf/common.sh@296 -- # local -ga x722 00:08:04.455 08:00:34 -- nvmf/common.sh@297 -- # mlx=() 00:08:04.455 08:00:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:04.455 08:00:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.455 08:00:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:04.455 08:00:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:04.455 08:00:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:04.455 08:00:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.455 08:00:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:04.455 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:04.455 08:00:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:04.455 08:00:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:04.455 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:04.455 08:00:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:04.455 08:00:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:04.455 08:00:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.455 08:00:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.455 08:00:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.455 08:00:34 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.455 08:00:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:04.455 Found net devices under 0000:31:00.0: cvl_0_0 00:08:04.456 08:00:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.456 08:00:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:04.456 08:00:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.456 08:00:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:04.456 08:00:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.456 08:00:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:04.456 Found net devices under 0000:31:00.1: cvl_0_1 00:08:04.456 08:00:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.456 08:00:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:04.456 08:00:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:04.456 08:00:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:04.456 08:00:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:04.456 08:00:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:04.456 08:00:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.456 08:00:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.456 08:00:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:04.456 08:00:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:04.456 08:00:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:04.456 08:00:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:04.456 08:00:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:04.456 08:00:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:04.456 08:00:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.456 08:00:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:04.456 08:00:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:04.456 08:00:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:04.456 08:00:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:04.717 08:00:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:04.717 08:00:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:04.717 08:00:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:04.717 08:00:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:04.717 08:00:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:04.717 08:00:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:04.717 08:00:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:04.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.817 ms 00:08:04.717 00:08:04.717 --- 10.0.0.2 ping statistics --- 00:08:04.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.717 rtt min/avg/max/mdev = 0.817/0.817/0.817/0.000 ms 00:08:04.717 08:00:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:04.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:04.717 00:08:04.717 --- 10.0.0.1 ping statistics --- 00:08:04.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.717 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:04.717 08:00:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.717 08:00:35 -- nvmf/common.sh@410 -- # return 0 00:08:04.717 08:00:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:04.717 08:00:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:04.717 08:00:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:04.717 08:00:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:04.717 08:00:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.717 08:00:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:04.717 08:00:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:04.717 08:00:35 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:04.717 08:00:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:04.717 08:00:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:04.717 08:00:35 -- common/autotest_common.sh@10 -- # set +x 00:08:04.717 08:00:35 -- nvmf/common.sh@469 -- # nvmfpid=878319 00:08:04.717 08:00:35 -- nvmf/common.sh@470 -- # waitforlisten 878319 00:08:04.717 08:00:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:04.717 08:00:35 -- common/autotest_common.sh@819 -- # '[' -z 878319 ']' 00:08:04.717 08:00:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.717 08:00:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:04.717 08:00:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.717 08:00:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:04.717 08:00:35 -- common/autotest_common.sh@10 -- # set +x 00:08:04.717 [2024-06-11 08:00:35.349686] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:04.717 [2024-06-11 08:00:35.349748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.977 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.977 [2024-06-11 08:00:35.421398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.977 [2024-06-11 08:00:35.495561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:04.977 [2024-06-11 08:00:35.495695] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.977 [2024-06-11 08:00:35.495705] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.977 [2024-06-11 08:00:35.495714] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
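With nvmf_tgt now starting inside the cvl_0_0_ns_spdk namespace, the discovery test creates the TCP transport once and then provisions four null-backed subsystems in a loop. One iteration corresponds to the rpc_cmd calls traced below; scripts/rpc.py is used here only as a stand-in for that wrapper:

  # Transport once, then one block per cnode (i = 1..4)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420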
00:08:04.977 [2024-06-11 08:00:35.495875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.977 [2024-06-11 08:00:35.495999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.977 [2024-06-11 08:00:35.496161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.977 [2024-06-11 08:00:35.496162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.545 08:00:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:05.545 08:00:36 -- common/autotest_common.sh@852 -- # return 0 00:08:05.545 08:00:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:05.545 08:00:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:05.545 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.545 08:00:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:05.545 08:00:36 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:05.545 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.545 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.545 [2024-06-11 08:00:36.173621] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:05.545 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.545 08:00:36 -- target/discovery.sh@26 -- # seq 1 4 00:08:05.545 08:00:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.545 08:00:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:05.545 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.545 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 Null1 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 [2024-06-11 08:00:36.229906] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.804 08:00:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 Null2 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:05.804 08:00:36 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.804 08:00:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 Null3 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:05.804 08:00:36 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 Null4 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.804 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.804 08:00:36 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:05.804 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.804 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.805 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.805 08:00:36 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:05.805 
08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.805 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.805 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.805 08:00:36 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.805 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.805 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.805 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.805 08:00:36 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:05.805 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.805 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:05.805 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.805 08:00:36 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:06.064 00:08:06.064 Discovery Log Number of Records 6, Generation counter 6 00:08:06.064 =====Discovery Log Entry 0====== 00:08:06.064 trtype: tcp 00:08:06.064 adrfam: ipv4 00:08:06.064 subtype: current discovery subsystem 00:08:06.064 treq: not required 00:08:06.064 portid: 0 00:08:06.064 trsvcid: 4420 00:08:06.064 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:06.064 traddr: 10.0.0.2 00:08:06.064 eflags: explicit discovery connections, duplicate discovery information 00:08:06.064 sectype: none 00:08:06.064 =====Discovery Log Entry 1====== 00:08:06.064 trtype: tcp 00:08:06.064 adrfam: ipv4 00:08:06.064 subtype: nvme subsystem 00:08:06.064 treq: not required 00:08:06.064 portid: 0 00:08:06.064 trsvcid: 4420 00:08:06.064 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:06.064 traddr: 10.0.0.2 00:08:06.064 eflags: none 00:08:06.064 sectype: none 00:08:06.064 =====Discovery Log Entry 2====== 00:08:06.064 trtype: tcp 00:08:06.064 adrfam: ipv4 00:08:06.064 subtype: nvme subsystem 00:08:06.064 treq: not required 00:08:06.064 portid: 0 00:08:06.064 trsvcid: 4420 00:08:06.064 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:06.064 traddr: 10.0.0.2 00:08:06.064 eflags: none 00:08:06.064 sectype: none 00:08:06.064 =====Discovery Log Entry 3====== 00:08:06.064 trtype: tcp 00:08:06.064 adrfam: ipv4 00:08:06.064 subtype: nvme subsystem 00:08:06.064 treq: not required 00:08:06.064 portid: 0 00:08:06.064 trsvcid: 4420 00:08:06.064 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:06.064 traddr: 10.0.0.2 00:08:06.064 eflags: none 00:08:06.064 sectype: none 00:08:06.064 =====Discovery Log Entry 4====== 00:08:06.064 trtype: tcp 00:08:06.064 adrfam: ipv4 00:08:06.064 subtype: nvme subsystem 00:08:06.064 treq: not required 00:08:06.064 portid: 0 00:08:06.064 trsvcid: 4420 00:08:06.064 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:06.064 traddr: 10.0.0.2 00:08:06.064 eflags: none 00:08:06.064 sectype: none 00:08:06.064 =====Discovery Log Entry 5====== 00:08:06.064 trtype: tcp 00:08:06.064 adrfam: ipv4 00:08:06.064 subtype: discovery subsystem referral 00:08:06.064 treq: not required 00:08:06.064 portid: 0 00:08:06.064 trsvcid: 4430 00:08:06.064 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:06.064 traddr: 10.0.0.2 00:08:06.064 eflags: none 00:08:06.064 sectype: none 00:08:06.064 08:00:36 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:06.064 Perform nvmf subsystem discovery via RPC 00:08:06.064 08:00:36 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:06.064 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.064 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.064 [2024-06-11 08:00:36.474591] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:06.064 [ 00:08:06.064 { 00:08:06.064 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:06.064 "subtype": "Discovery", 00:08:06.064 "listen_addresses": [ 00:08:06.064 { 00:08:06.064 "transport": "TCP", 00:08:06.064 "trtype": "TCP", 00:08:06.064 "adrfam": "IPv4", 00:08:06.064 "traddr": "10.0.0.2", 00:08:06.064 "trsvcid": "4420" 00:08:06.064 } 00:08:06.064 ], 00:08:06.064 "allow_any_host": true, 00:08:06.064 "hosts": [] 00:08:06.064 }, 00:08:06.064 { 00:08:06.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:06.064 "subtype": "NVMe", 00:08:06.064 "listen_addresses": [ 00:08:06.064 { 00:08:06.064 "transport": "TCP", 00:08:06.064 "trtype": "TCP", 00:08:06.064 "adrfam": "IPv4", 00:08:06.064 "traddr": "10.0.0.2", 00:08:06.064 "trsvcid": "4420" 00:08:06.064 } 00:08:06.064 ], 00:08:06.064 "allow_any_host": true, 00:08:06.064 "hosts": [], 00:08:06.064 "serial_number": "SPDK00000000000001", 00:08:06.064 "model_number": "SPDK bdev Controller", 00:08:06.064 "max_namespaces": 32, 00:08:06.064 "min_cntlid": 1, 00:08:06.064 "max_cntlid": 65519, 00:08:06.064 "namespaces": [ 00:08:06.064 { 00:08:06.064 "nsid": 1, 00:08:06.064 "bdev_name": "Null1", 00:08:06.064 "name": "Null1", 00:08:06.064 "nguid": "FE2ADFAAB38D454DB024D6042A0C11C7", 00:08:06.064 "uuid": "fe2adfaa-b38d-454d-b024-d6042a0c11c7" 00:08:06.064 } 00:08:06.064 ] 00:08:06.064 }, 00:08:06.064 { 00:08:06.064 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:06.064 "subtype": "NVMe", 00:08:06.064 "listen_addresses": [ 00:08:06.064 { 00:08:06.064 "transport": "TCP", 00:08:06.064 "trtype": "TCP", 00:08:06.064 "adrfam": "IPv4", 00:08:06.064 "traddr": "10.0.0.2", 00:08:06.064 "trsvcid": "4420" 00:08:06.064 } 00:08:06.064 ], 00:08:06.064 "allow_any_host": true, 00:08:06.064 "hosts": [], 00:08:06.064 "serial_number": "SPDK00000000000002", 00:08:06.064 "model_number": "SPDK bdev Controller", 00:08:06.064 "max_namespaces": 32, 00:08:06.064 "min_cntlid": 1, 00:08:06.064 "max_cntlid": 65519, 00:08:06.064 "namespaces": [ 00:08:06.064 { 00:08:06.064 "nsid": 1, 00:08:06.064 "bdev_name": "Null2", 00:08:06.064 "name": "Null2", 00:08:06.064 "nguid": "1622653AD61F4AD8AD66F58A8225C526", 00:08:06.064 "uuid": "1622653a-d61f-4ad8-ad66-f58a8225c526" 00:08:06.064 } 00:08:06.064 ] 00:08:06.064 }, 00:08:06.064 { 00:08:06.064 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:06.064 "subtype": "NVMe", 00:08:06.064 "listen_addresses": [ 00:08:06.064 { 00:08:06.064 "transport": "TCP", 00:08:06.064 "trtype": "TCP", 00:08:06.064 "adrfam": "IPv4", 00:08:06.064 "traddr": "10.0.0.2", 00:08:06.064 "trsvcid": "4420" 00:08:06.064 } 00:08:06.064 ], 00:08:06.064 "allow_any_host": true, 00:08:06.064 "hosts": [], 00:08:06.064 "serial_number": "SPDK00000000000003", 00:08:06.064 "model_number": "SPDK bdev Controller", 00:08:06.064 "max_namespaces": 32, 00:08:06.064 "min_cntlid": 1, 00:08:06.064 "max_cntlid": 65519, 00:08:06.064 "namespaces": [ 00:08:06.064 { 00:08:06.064 "nsid": 1, 00:08:06.064 "bdev_name": "Null3", 00:08:06.064 "name": "Null3", 00:08:06.064 "nguid": "B77B315DD4FC462390F1BF06EBF20493", 00:08:06.064 "uuid": "b77b315d-d4fc-4623-90f1-bf06ebf20493" 00:08:06.064 } 00:08:06.064 ] 
00:08:06.064 }, 00:08:06.064 { 00:08:06.064 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:06.064 "subtype": "NVMe", 00:08:06.064 "listen_addresses": [ 00:08:06.064 { 00:08:06.064 "transport": "TCP", 00:08:06.064 "trtype": "TCP", 00:08:06.064 "adrfam": "IPv4", 00:08:06.064 "traddr": "10.0.0.2", 00:08:06.064 "trsvcid": "4420" 00:08:06.064 } 00:08:06.064 ], 00:08:06.064 "allow_any_host": true, 00:08:06.064 "hosts": [], 00:08:06.064 "serial_number": "SPDK00000000000004", 00:08:06.064 "model_number": "SPDK bdev Controller", 00:08:06.064 "max_namespaces": 32, 00:08:06.064 "min_cntlid": 1, 00:08:06.064 "max_cntlid": 65519, 00:08:06.064 "namespaces": [ 00:08:06.064 { 00:08:06.064 "nsid": 1, 00:08:06.064 "bdev_name": "Null4", 00:08:06.064 "name": "Null4", 00:08:06.064 "nguid": "87B2FC49EF884B1698C73F440EB6764B", 00:08:06.064 "uuid": "87b2fc49-ef88-4b16-98c7-3f440eb6764b" 00:08:06.065 } 00:08:06.065 ] 00:08:06.065 } 00:08:06.065 ] 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@42 -- # seq 1 4 00:08:06.065 08:00:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 08:00:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 08:00:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 08:00:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:06.065 08:00:36 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
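The get_subsystems output above is followed by cleanup: the same 1..4 loop deletes each subsystem and its null bdev, and the referral added earlier is removed. Condensed, with the rpc.py path assumed:

  # Cleanup loop mirrored from the trace
  for i in 1 2 3 4; do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    scripts/rpc.py bdev_null_delete "Null$i"
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430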
00:08:06.065 08:00:36 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:06.065 08:00:36 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:06.065 08:00:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.065 08:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.065 08:00:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.065 08:00:36 -- target/discovery.sh@49 -- # check_bdevs= 00:08:06.065 08:00:36 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:06.065 08:00:36 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:06.065 08:00:36 -- target/discovery.sh@57 -- # nvmftestfini 00:08:06.065 08:00:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:06.065 08:00:36 -- nvmf/common.sh@116 -- # sync 00:08:06.065 08:00:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:06.065 08:00:36 -- nvmf/common.sh@119 -- # set +e 00:08:06.065 08:00:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:06.065 08:00:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:06.065 rmmod nvme_tcp 00:08:06.065 rmmod nvme_fabrics 00:08:06.065 rmmod nvme_keyring 00:08:06.065 08:00:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:06.065 08:00:36 -- nvmf/common.sh@123 -- # set -e 00:08:06.065 08:00:36 -- nvmf/common.sh@124 -- # return 0 00:08:06.065 08:00:36 -- nvmf/common.sh@477 -- # '[' -n 878319 ']' 00:08:06.065 08:00:36 -- nvmf/common.sh@478 -- # killprocess 878319 00:08:06.065 08:00:36 -- common/autotest_common.sh@926 -- # '[' -z 878319 ']' 00:08:06.065 08:00:36 -- common/autotest_common.sh@930 -- # kill -0 878319 00:08:06.065 08:00:36 -- common/autotest_common.sh@931 -- # uname 00:08:06.324 08:00:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:06.324 08:00:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 878319 00:08:06.324 08:00:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:06.324 08:00:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:06.324 08:00:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 878319' 00:08:06.324 killing process with pid 878319 00:08:06.324 08:00:36 -- common/autotest_common.sh@945 -- # kill 878319 00:08:06.324 [2024-06-11 08:00:36.761279] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:06.324 08:00:36 -- common/autotest_common.sh@950 -- # wait 878319 00:08:06.324 08:00:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:06.324 08:00:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:06.324 08:00:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:06.324 08:00:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.324 08:00:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:06.324 08:00:36 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.324 08:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.324 08:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.865 08:00:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:08.865 00:08:08.865 real 0m10.940s 00:08:08.865 user 0m7.742s 00:08:08.865 sys 0m5.644s 00:08:08.865 08:00:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.865 08:00:38 -- common/autotest_common.sh@10 -- # set +x 00:08:08.865 ************************************ 00:08:08.865 END TEST nvmf_discovery 00:08:08.865 ************************************ 00:08:08.865 08:00:39 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:08.865 08:00:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:08.865 08:00:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.865 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:08.865 ************************************ 00:08:08.865 START TEST nvmf_referrals 00:08:08.865 ************************************ 00:08:08.865 08:00:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:08.865 * Looking for test storage... 00:08:08.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.865 08:00:39 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.865 08:00:39 -- nvmf/common.sh@7 -- # uname -s 00:08:08.865 08:00:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.865 08:00:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.865 08:00:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.865 08:00:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.865 08:00:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.865 08:00:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.865 08:00:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.865 08:00:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.865 08:00:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.865 08:00:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.865 08:00:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:08.865 08:00:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:08.865 08:00:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.865 08:00:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.865 08:00:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.865 08:00:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.865 08:00:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.865 08:00:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.865 08:00:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.865 08:00:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.865 08:00:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.865 08:00:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.865 08:00:39 -- paths/export.sh@5 -- # export PATH 00:08:08.865 08:00:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.865 08:00:39 -- nvmf/common.sh@46 -- # : 0 00:08:08.865 08:00:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:08.865 08:00:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:08.865 08:00:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:08.865 08:00:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.865 08:00:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.865 08:00:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:08.865 08:00:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:08.865 08:00:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:08.865 08:00:39 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:08.865 08:00:39 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:08.865 08:00:39 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:08.865 08:00:39 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:08.865 08:00:39 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:08.865 08:00:39 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:08.865 08:00:39 -- target/referrals.sh@37 -- # nvmftestinit 00:08:08.865 08:00:39 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:08.865 08:00:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.865 08:00:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:08.865 08:00:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:08.865 08:00:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:08.865 08:00:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.865 08:00:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.865 08:00:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.865 08:00:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:08.865 08:00:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:08.865 08:00:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:08.865 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:15.443 08:00:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:15.443 08:00:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:15.443 08:00:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:15.443 08:00:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:15.443 08:00:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:15.443 08:00:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:15.443 08:00:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:15.443 08:00:45 -- nvmf/common.sh@294 -- # net_devs=() 00:08:15.443 08:00:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:15.443 08:00:45 -- nvmf/common.sh@295 -- # e810=() 00:08:15.443 08:00:45 -- nvmf/common.sh@295 -- # local -ga e810 00:08:15.443 08:00:45 -- nvmf/common.sh@296 -- # x722=() 00:08:15.443 08:00:45 -- nvmf/common.sh@296 -- # local -ga x722 00:08:15.443 08:00:45 -- nvmf/common.sh@297 -- # mlx=() 00:08:15.443 08:00:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:15.443 08:00:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.443 08:00:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:15.443 08:00:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:15.443 08:00:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:15.443 08:00:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:15.443 08:00:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:15.443 08:00:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:15.443 08:00:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:15.443 08:00:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:15.443 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:15.444 08:00:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:15.444 08:00:45 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:15.444 08:00:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:15.444 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:15.444 08:00:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:15.444 08:00:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:15.444 08:00:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.444 08:00:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:15.444 08:00:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.444 08:00:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:15.444 Found net devices under 0000:31:00.0: cvl_0_0 00:08:15.444 08:00:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.444 08:00:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:15.444 08:00:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.444 08:00:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:15.444 08:00:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.444 08:00:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:15.444 Found net devices under 0000:31:00.1: cvl_0_1 00:08:15.444 08:00:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.444 08:00:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:15.444 08:00:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:15.444 08:00:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:15.444 08:00:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:15.444 08:00:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.444 08:00:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.444 08:00:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.444 08:00:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:15.444 08:00:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.444 08:00:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.444 08:00:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:15.444 08:00:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.444 08:00:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.444 08:00:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:15.444 08:00:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:15.444 08:00:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.444 08:00:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
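As in the discovery run, nvmf_tcp_init builds a loopback fabric out of the two E810 ports by moving one of them into a private network namespace, so the initiator (root namespace, 10.0.0.1) and the target (cvl_0_0_ns_spdk, 10.0.0.2) talk over real hardware on a single host. The commands being traced here amount to:

  # Two-port NVMe/TCP loopback topology (addresses as in the trace)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # initiator -> target sanity check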
00:08:15.444 08:00:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.444 08:00:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.444 08:00:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:15.444 08:00:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.704 08:00:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.704 08:00:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.704 08:00:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:15.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:08:15.704 00:08:15.704 --- 10.0.0.2 ping statistics --- 00:08:15.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.704 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:08:15.704 08:00:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:08:15.704 00:08:15.704 --- 10.0.0.1 ping statistics --- 00:08:15.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.704 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:15.704 08:00:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.704 08:00:46 -- nvmf/common.sh@410 -- # return 0 00:08:15.704 08:00:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:15.704 08:00:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.704 08:00:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:15.704 08:00:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:15.704 08:00:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.704 08:00:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:15.704 08:00:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:15.704 08:00:46 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:15.704 08:00:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:15.704 08:00:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:15.704 08:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:15.704 08:00:46 -- nvmf/common.sh@469 -- # nvmfpid=883001 00:08:15.704 08:00:46 -- nvmf/common.sh@470 -- # waitforlisten 883001 00:08:15.704 08:00:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.704 08:00:46 -- common/autotest_common.sh@819 -- # '[' -z 883001 ']' 00:08:15.704 08:00:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.704 08:00:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:15.704 08:00:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.705 08:00:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:15.705 08:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:15.705 [2024-06-11 08:00:46.274318] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
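Once this target is up, the referrals test adds a discovery listener on port 8009 plus three referrals, then reads them back both over RPC and from the initiator with nvme discover. Condensed from the rpc_cmd calls that follow, with scripts/rpc.py assumed as the RPC entry point:

  # Referral round trip mirrored from the trace
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json   # initiator-side view of the same referrals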
00:08:15.705 [2024-06-11 08:00:46.274382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.705 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.705 [2024-06-11 08:00:46.345180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.964 [2024-06-11 08:00:46.417960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.964 [2024-06-11 08:00:46.418091] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.964 [2024-06-11 08:00:46.418102] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.964 [2024-06-11 08:00:46.418110] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.964 [2024-06-11 08:00:46.418276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.964 [2024-06-11 08:00:46.418382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.964 [2024-06-11 08:00:46.418545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.964 [2024-06-11 08:00:46.418545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.536 08:00:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:16.536 08:00:47 -- common/autotest_common.sh@852 -- # return 0 00:08:16.536 08:00:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:16.536 08:00:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:16.536 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.536 08:00:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.536 08:00:47 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.536 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.536 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.536 [2024-06-11 08:00:47.102623] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.536 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.536 08:00:47 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:16.536 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.536 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.536 [2024-06-11 08:00:47.118793] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:16.536 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.536 08:00:47 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:16.536 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.536 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.536 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.536 08:00:47 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:16.536 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.536 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.536 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.536 08:00:47 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:16.536 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.536 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.536 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.536 08:00:47 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.536 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.536 08:00:47 -- target/referrals.sh@48 -- # jq length 00:08:16.536 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.536 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.797 08:00:47 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:16.797 08:00:47 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:16.797 08:00:47 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:16.797 08:00:47 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.797 08:00:47 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:16.797 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:16.797 08:00:47 -- target/referrals.sh@21 -- # sort 00:08:16.797 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.797 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:16.797 08:00:47 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:16.797 08:00:47 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:16.797 08:00:47 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:16.797 08:00:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.797 08:00:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.797 08:00:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:16.797 08:00:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.797 08:00:47 -- target/referrals.sh@26 -- # sort 00:08:17.059 08:00:47 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:17.059 08:00:47 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:17.059 08:00:47 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:17.059 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.059 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.059 08:00:47 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:17.059 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.059 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.059 08:00:47 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:17.059 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.059 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.059 08:00:47 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.059 08:00:47 -- target/referrals.sh@56 -- # jq length 00:08:17.059 08:00:47 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.059 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.059 08:00:47 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:17.059 08:00:47 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:17.059 08:00:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.059 08:00:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.059 08:00:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.059 08:00:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.059 08:00:47 -- target/referrals.sh@26 -- # sort 00:08:17.059 08:00:47 -- target/referrals.sh@26 -- # echo 00:08:17.059 08:00:47 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:17.059 08:00:47 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:17.059 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.059 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.059 08:00:47 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.059 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.059 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.059 08:00:47 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:17.059 08:00:47 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.059 08:00:47 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.059 08:00:47 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.059 08:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.059 08:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.059 08:00:47 -- target/referrals.sh@21 -- # sort 00:08:17.059 08:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.319 08:00:47 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:17.319 08:00:47 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:17.319 08:00:47 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:17.319 08:00:47 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.319 08:00:47 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.319 08:00:47 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.319 08:00:47 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.319 08:00:47 -- target/referrals.sh@26 -- # sort 00:08:17.319 08:00:47 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:17.319 08:00:47 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:17.319 08:00:47 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:17.319 08:00:47 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:17.319 08:00:47 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:17.319 08:00:47 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.319 08:00:47 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:17.579 08:00:47 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:17.579 08:00:47 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:17.579 08:00:47 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:17.579 08:00:47 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:17.579 08:00:47 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.579 08:00:47 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:17.579 08:00:48 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:17.579 08:00:48 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.579 08:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.579 08:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.579 08:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.579 08:00:48 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:17.579 08:00:48 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.579 08:00:48 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.579 08:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.579 08:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.579 08:00:48 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.579 08:00:48 -- target/referrals.sh@21 -- # sort 00:08:17.579 08:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.579 08:00:48 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:17.579 08:00:48 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:17.579 08:00:48 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:17.579 08:00:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.579 08:00:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.579 08:00:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.579 08:00:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.579 08:00:48 -- target/referrals.sh@26 -- # sort 00:08:17.579 08:00:48 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:17.579 08:00:48 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:17.579 08:00:48 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:17.579 08:00:48 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:17.579 08:00:48 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:17.579 08:00:48 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.579 08:00:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:17.839 08:00:48 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:17.839 08:00:48 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:17.839 08:00:48 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:17.839 08:00:48 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:17.839 08:00:48 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:17.839 08:00:48 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:18.099 08:00:48 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:18.099 08:00:48 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:18.099 08:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.099 08:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:18.099 08:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.099 08:00:48 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:18.099 08:00:48 -- target/referrals.sh@82 -- # jq length 00:08:18.099 08:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.099 08:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:18.099 08:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.099 08:00:48 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:18.099 08:00:48 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:18.099 08:00:48 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:18.099 08:00:48 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:18.099 08:00:48 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:18.099 08:00:48 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:18.099 08:00:48 -- target/referrals.sh@26 -- # sort 00:08:18.099 08:00:48 -- target/referrals.sh@26 -- # echo 00:08:18.099 08:00:48 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:18.099 08:00:48 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:18.099 08:00:48 -- target/referrals.sh@86 -- # nvmftestfini 00:08:18.099 08:00:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:18.099 08:00:48 -- nvmf/common.sh@116 -- # sync 00:08:18.100 08:00:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:18.100 08:00:48 -- nvmf/common.sh@119 -- # set +e 00:08:18.100 08:00:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:18.100 08:00:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:18.100 rmmod nvme_tcp 00:08:18.100 rmmod nvme_fabrics 00:08:18.100 rmmod nvme_keyring 00:08:18.100 08:00:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:18.100 08:00:48 -- nvmf/common.sh@123 -- # set -e 00:08:18.100 08:00:48 -- nvmf/common.sh@124 -- # return 0 00:08:18.100 08:00:48 -- nvmf/common.sh@477 
-- # '[' -n 883001 ']' 00:08:18.100 08:00:48 -- nvmf/common.sh@478 -- # killprocess 883001 00:08:18.100 08:00:48 -- common/autotest_common.sh@926 -- # '[' -z 883001 ']' 00:08:18.100 08:00:48 -- common/autotest_common.sh@930 -- # kill -0 883001 00:08:18.100 08:00:48 -- common/autotest_common.sh@931 -- # uname 00:08:18.100 08:00:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:18.100 08:00:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 883001 00:08:18.100 08:00:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:18.100 08:00:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:18.100 08:00:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 883001' 00:08:18.100 killing process with pid 883001 00:08:18.100 08:00:48 -- common/autotest_common.sh@945 -- # kill 883001 00:08:18.100 08:00:48 -- common/autotest_common.sh@950 -- # wait 883001 00:08:18.359 08:00:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:18.359 08:00:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:18.359 08:00:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:18.359 08:00:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.359 08:00:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:18.359 08:00:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.359 08:00:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.359 08:00:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.899 08:00:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:20.899 00:08:20.899 real 0m11.926s 00:08:20.899 user 0m12.727s 00:08:20.899 sys 0m5.749s 00:08:20.900 08:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.900 08:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.900 ************************************ 00:08:20.900 END TEST nvmf_referrals 00:08:20.900 ************************************ 00:08:20.900 08:00:50 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:20.900 08:00:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:20.900 08:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.900 08:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.900 ************************************ 00:08:20.900 START TEST nvmf_connect_disconnect 00:08:20.900 ************************************ 00:08:20.900 08:00:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:20.900 * Looking for test storage... 
00:08:20.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.900 08:00:51 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.900 08:00:51 -- nvmf/common.sh@7 -- # uname -s 00:08:20.900 08:00:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.900 08:00:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.900 08:00:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.900 08:00:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.900 08:00:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.900 08:00:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.900 08:00:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.900 08:00:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.900 08:00:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.900 08:00:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.900 08:00:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:20.900 08:00:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:20.900 08:00:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.900 08:00:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.900 08:00:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.900 08:00:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.900 08:00:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.900 08:00:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.900 08:00:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.900 08:00:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.900 08:00:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.900 08:00:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.900 08:00:51 -- paths/export.sh@5 -- # export PATH 00:08:20.900 08:00:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.900 08:00:51 -- nvmf/common.sh@46 -- # : 0 00:08:20.900 08:00:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:20.900 08:00:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:20.900 08:00:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:20.900 08:00:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.900 08:00:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.900 08:00:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:20.900 08:00:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:20.900 08:00:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:20.900 08:00:51 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.900 08:00:51 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.900 08:00:51 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:20.900 08:00:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:20.900 08:00:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.900 08:00:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:20.900 08:00:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:20.900 08:00:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:20.900 08:00:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.900 08:00:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.900 08:00:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.900 08:00:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:20.900 08:00:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:20.900 08:00:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:20.900 08:00:51 -- common/autotest_common.sh@10 -- # set +x 00:08:27.476 08:00:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:27.476 08:00:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:27.476 08:00:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:27.476 08:00:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:27.476 08:00:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:27.476 08:00:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:27.476 08:00:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:27.476 08:00:57 -- nvmf/common.sh@294 -- # net_devs=() 00:08:27.476 08:00:57 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:27.476 08:00:57 -- nvmf/common.sh@295 -- # e810=() 00:08:27.476 08:00:57 -- nvmf/common.sh@295 -- # local -ga e810 00:08:27.476 08:00:57 -- nvmf/common.sh@296 -- # x722=() 00:08:27.476 08:00:57 -- nvmf/common.sh@296 -- # local -ga x722 00:08:27.476 08:00:57 -- nvmf/common.sh@297 -- # mlx=() 00:08:27.476 08:00:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:27.476 08:00:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.476 08:00:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:27.476 08:00:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:27.476 08:00:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:27.476 08:00:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:27.476 08:00:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:27.476 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:27.476 08:00:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:27.476 08:00:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:27.476 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:27.476 08:00:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:27.476 08:00:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:27.476 08:00:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:27.476 08:00:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.476 08:00:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:27.476 08:00:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.476 08:00:57 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:08:27.476 Found net devices under 0000:31:00.0: cvl_0_0 00:08:27.476 08:00:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.476 08:00:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:27.476 08:00:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.476 08:00:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:27.476 08:00:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.476 08:00:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:27.477 Found net devices under 0000:31:00.1: cvl_0_1 00:08:27.477 08:00:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.477 08:00:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:27.477 08:00:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:27.477 08:00:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:27.477 08:00:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:27.477 08:00:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:27.477 08:00:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.477 08:00:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.477 08:00:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.477 08:00:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:27.477 08:00:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.477 08:00:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.477 08:00:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:27.477 08:00:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.477 08:00:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.477 08:00:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:27.477 08:00:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:27.477 08:00:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.477 08:00:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.477 08:00:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.477 08:00:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.477 08:00:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:27.477 08:00:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.738 08:00:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.738 08:00:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.738 08:00:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:27.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:08:27.738 00:08:27.738 --- 10.0.0.2 ping statistics --- 00:08:27.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.738 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:08:27.738 08:00:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:08:27.738 00:08:27.738 --- 10.0.0.1 ping statistics --- 00:08:27.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.738 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:08:27.738 08:00:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.738 08:00:58 -- nvmf/common.sh@410 -- # return 0 00:08:27.738 08:00:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:27.738 08:00:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.738 08:00:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:27.738 08:00:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:27.738 08:00:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.738 08:00:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:27.738 08:00:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:27.738 08:00:58 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:27.738 08:00:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:27.738 08:00:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:27.738 08:00:58 -- common/autotest_common.sh@10 -- # set +x 00:08:27.738 08:00:58 -- nvmf/common.sh@469 -- # nvmfpid=887826 00:08:27.738 08:00:58 -- nvmf/common.sh@470 -- # waitforlisten 887826 00:08:27.738 08:00:58 -- common/autotest_common.sh@819 -- # '[' -z 887826 ']' 00:08:27.738 08:00:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.738 08:00:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.738 08:00:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:27.738 08:00:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.738 08:00:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:27.738 08:00:58 -- common/autotest_common.sh@10 -- # set +x 00:08:27.738 [2024-06-11 08:00:58.317348] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:27.738 [2024-06-11 08:00:58.317407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.738 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.999 [2024-06-11 08:00:58.387081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.999 [2024-06-11 08:00:58.459921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.999 [2024-06-11 08:00:58.460058] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.999 [2024-06-11 08:00:58.460068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.999 [2024-06-11 08:00:58.460076] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:27.999 [2024-06-11 08:00:58.460234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.999 [2024-06-11 08:00:58.460357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.999 [2024-06-11 08:00:58.460518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.999 [2024-06-11 08:00:58.460519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.571 08:00:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:28.571 08:00:59 -- common/autotest_common.sh@852 -- # return 0 00:08:28.571 08:00:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:28.571 08:00:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:28.571 08:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.571 08:00:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:28.571 08:00:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.571 08:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.571 [2024-06-11 08:00:59.139642] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.571 08:00:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:28.571 08:00:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.571 08:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.571 08:00:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.571 08:00:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.571 08:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.571 08:00:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.571 08:00:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.571 08:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.571 08:00:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.571 08:00:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.571 08:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.571 [2024-06-11 08:00:59.198998] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.571 08:00:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:28.571 08:00:59 -- target/connect_disconnect.sh@34 -- # set +x 00:08:31.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:40.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.829 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.282 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:32.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.065 08:04:48 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
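(The wall of "disconnected 1 controller(s)" messages above is the output of nvme disconnect, printed once per pass of the 100-iteration loop configured at connect_disconnect.sh@27-29. The loop body runs under set +x, so the exact command lines are not in the log; the following is only a rough reconstruction of one iteration from the settings that are visible (NVME_CONNECT='nvme connect -i 8', listener on 10.0.0.2 port 4420, hostnqn/hostid from this run), and the real script may do additional waiting or I/O between the two calls:

    #!/usr/bin/env bash
    # One iteration of the connect/disconnect loop, as implied by the configuration above.
    NQN=nqn.2016-06.io.spdk:cnode1
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    # ... controller and namespaces appear under /dev; the harness then tears the connection down ...
    nvme disconnect -n "$NQN"   # prints: NQN:<nqn> disconnected 1 controller(s)
)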
00:12:18.065 08:04:48 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:18.065 08:04:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:18.065 08:04:48 -- nvmf/common.sh@116 -- # sync 00:12:18.065 08:04:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.065 08:04:48 -- nvmf/common.sh@119 -- # set +e 00:12:18.065 08:04:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.065 08:04:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.065 rmmod nvme_tcp 00:12:18.065 rmmod nvme_fabrics 00:12:18.065 rmmod nvme_keyring 00:12:18.065 08:04:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.065 08:04:48 -- nvmf/common.sh@123 -- # set -e 00:12:18.065 08:04:48 -- nvmf/common.sh@124 -- # return 0 00:12:18.065 08:04:48 -- nvmf/common.sh@477 -- # '[' -n 887826 ']' 00:12:18.065 08:04:48 -- nvmf/common.sh@478 -- # killprocess 887826 00:12:18.065 08:04:48 -- common/autotest_common.sh@926 -- # '[' -z 887826 ']' 00:12:18.065 08:04:48 -- common/autotest_common.sh@930 -- # kill -0 887826 00:12:18.065 08:04:48 -- common/autotest_common.sh@931 -- # uname 00:12:18.065 08:04:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:18.066 08:04:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 887826 00:12:18.066 08:04:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:18.066 08:04:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:18.066 08:04:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 887826' 00:12:18.066 killing process with pid 887826 00:12:18.066 08:04:48 -- common/autotest_common.sh@945 -- # kill 887826 00:12:18.066 08:04:48 -- common/autotest_common.sh@950 -- # wait 887826 00:12:18.327 08:04:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.327 08:04:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.327 08:04:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.327 08:04:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.327 08:04:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.327 08:04:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.327 08:04:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.327 08:04:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.244 08:04:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:20.244 00:12:20.244 real 3m59.872s 00:12:20.244 user 15m15.759s 00:12:20.244 sys 0m18.717s 00:12:20.244 08:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:20.244 08:04:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.244 ************************************ 00:12:20.244 END TEST nvmf_connect_disconnect 00:12:20.244 ************************************ 00:12:20.505 08:04:50 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.505 08:04:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:20.505 08:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:20.505 08:04:50 -- common/autotest_common.sh@10 -- # set +x 00:12:20.505 ************************************ 00:12:20.505 START TEST nvmf_multitarget 00:12:20.505 ************************************ 00:12:20.505 08:04:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.505 * Looking for test storage... 
00:12:20.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.505 08:04:50 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.505 08:04:50 -- nvmf/common.sh@7 -- # uname -s 00:12:20.505 08:04:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.505 08:04:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.505 08:04:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.505 08:04:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.505 08:04:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.505 08:04:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.505 08:04:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.505 08:04:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.505 08:04:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.505 08:04:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.505 08:04:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:20.505 08:04:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:20.505 08:04:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.505 08:04:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.505 08:04:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.505 08:04:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.505 08:04:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.505 08:04:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.505 08:04:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.505 08:04:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.505 08:04:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.505 08:04:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.505 08:04:51 -- paths/export.sh@5 -- # export PATH 00:12:20.505 08:04:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.505 08:04:51 -- nvmf/common.sh@46 -- # : 0 00:12:20.505 08:04:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:20.505 08:04:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:20.505 08:04:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:20.505 08:04:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.505 08:04:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.505 08:04:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:20.505 08:04:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:20.505 08:04:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:20.505 08:04:51 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:20.505 08:04:51 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:20.505 08:04:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:20.505 08:04:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.505 08:04:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:20.505 08:04:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:20.505 08:04:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:20.505 08:04:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.505 08:04:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.505 08:04:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.505 08:04:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:20.505 08:04:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:20.505 08:04:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:20.505 08:04:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.668 08:04:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:28.668 08:04:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:28.668 08:04:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:28.668 08:04:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:28.668 08:04:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:28.668 08:04:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:28.668 08:04:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:28.668 08:04:57 -- nvmf/common.sh@294 -- # net_devs=() 00:12:28.668 08:04:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:28.668 08:04:57 -- 
nvmf/common.sh@295 -- # e810=() 00:12:28.668 08:04:57 -- nvmf/common.sh@295 -- # local -ga e810 00:12:28.668 08:04:57 -- nvmf/common.sh@296 -- # x722=() 00:12:28.668 08:04:57 -- nvmf/common.sh@296 -- # local -ga x722 00:12:28.668 08:04:57 -- nvmf/common.sh@297 -- # mlx=() 00:12:28.668 08:04:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:28.668 08:04:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.668 08:04:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:28.668 08:04:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:28.668 08:04:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:28.668 08:04:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:28.668 08:04:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:28.668 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:28.668 08:04:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:28.668 08:04:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:28.668 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:28.668 08:04:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:28.668 08:04:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:28.668 08:04:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.668 08:04:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:28.668 08:04:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.668 08:04:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:12:28.668 Found net devices under 0000:31:00.0: cvl_0_0 00:12:28.668 08:04:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.668 08:04:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:28.668 08:04:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.668 08:04:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:28.668 08:04:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.668 08:04:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:28.668 Found net devices under 0000:31:00.1: cvl_0_1 00:12:28.668 08:04:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.668 08:04:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:28.668 08:04:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:28.668 08:04:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:28.668 08:04:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:28.668 08:04:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.668 08:04:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.668 08:04:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.668 08:04:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:28.668 08:04:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.668 08:04:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.669 08:04:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:28.669 08:04:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.669 08:04:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.669 08:04:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:28.669 08:04:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:28.669 08:04:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.669 08:04:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.669 08:04:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.669 08:04:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.669 08:04:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:28.669 08:04:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.669 08:04:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.669 08:04:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.669 08:04:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:28.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:12:28.669 00:12:28.669 --- 10.0.0.2 ping statistics --- 00:12:28.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.669 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:12:28.669 08:04:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:12:28.669 00:12:28.669 --- 10.0.0.1 ping statistics --- 00:12:28.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.669 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:28.669 08:04:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.669 08:04:58 -- nvmf/common.sh@410 -- # return 0 00:12:28.669 08:04:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:28.669 08:04:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.669 08:04:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:28.669 08:04:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:28.669 08:04:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.669 08:04:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:28.669 08:04:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:28.669 08:04:58 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:28.669 08:04:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:28.669 08:04:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:28.669 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:28.669 08:04:58 -- nvmf/common.sh@469 -- # nvmfpid=939789 00:12:28.669 08:04:58 -- nvmf/common.sh@470 -- # waitforlisten 939789 00:12:28.669 08:04:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.669 08:04:58 -- common/autotest_common.sh@819 -- # '[' -z 939789 ']' 00:12:28.669 08:04:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.669 08:04:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:28.669 08:04:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.669 08:04:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:28.669 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:28.669 [2024-06-11 08:04:58.284034] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:28.669 [2024-06-11 08:04:58.284090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.669 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.669 [2024-06-11 08:04:58.354623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.669 [2024-06-11 08:04:58.428430] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:28.669 [2024-06-11 08:04:58.428572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.669 [2024-06-11 08:04:58.428583] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.669 [2024-06-11 08:04:58.428591] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
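At this point the trace has finished wiring up the test network and has launched the target: one port of the dual-port E810 NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is started inside that namespace. A minimal sketch of the same setup, with interface names, addresses and nvmf_tgt flags copied from the trace above (paths shortened relative to the SPDK checkout, not a canonical recipe):

  ip netns add cvl_0_0_ns_spdk                                   # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator port
  ping -c 1 10.0.0.2                                             # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace

The startup output that follows is from this nvmf_tgt instance.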
00:12:28.669 [2024-06-11 08:04:58.428759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.669 [2024-06-11 08:04:58.428866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.669 [2024-06-11 08:04:58.429009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.669 [2024-06-11 08:04:58.429010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.669 08:04:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:28.669 08:04:59 -- common/autotest_common.sh@852 -- # return 0 00:12:28.669 08:04:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:28.669 08:04:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:28.669 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:12:28.669 08:04:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.669 08:04:59 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:28.669 08:04:59 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.669 08:04:59 -- target/multitarget.sh@21 -- # jq length 00:12:28.669 08:04:59 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:28.669 08:04:59 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:28.669 "nvmf_tgt_1" 00:12:28.669 08:04:59 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:28.930 "nvmf_tgt_2" 00:12:28.931 08:04:59 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.931 08:04:59 -- target/multitarget.sh@28 -- # jq length 00:12:28.931 08:04:59 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:28.931 08:04:59 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:28.931 true 00:12:29.192 08:04:59 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:29.192 true 00:12:29.192 08:04:59 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.192 08:04:59 -- target/multitarget.sh@35 -- # jq length 00:12:29.192 08:04:59 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:29.192 08:04:59 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:29.192 08:04:59 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:29.192 08:04:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:29.192 08:04:59 -- nvmf/common.sh@116 -- # sync 00:12:29.192 08:04:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:29.192 08:04:59 -- nvmf/common.sh@119 -- # set +e 00:12:29.192 08:04:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:29.192 08:04:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:29.192 rmmod nvme_tcp 00:12:29.192 rmmod nvme_fabrics 00:12:29.192 rmmod nvme_keyring 00:12:29.453 08:04:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:29.453 08:04:59 -- nvmf/common.sh@123 -- # set -e 00:12:29.453 08:04:59 -- nvmf/common.sh@124 -- # return 0 
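Stripped of the xtrace noise, the nvmf_multitarget pass above amounts to the following RPC sequence against the running nvmf_tgt; this is a minimal sketch using the multitarget_rpc.py helper exactly as the trace does (the -s 32 argument is copied verbatim from the trace rather than documented here):

  rpc=test/nvmf/target/multitarget_rpc.py        # helper invoked by multitarget.sh
  $rpc nvmf_get_targets | jq length              # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # add two extra targets
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length              # 3
  $rpc nvmf_delete_target -n nvmf_tgt_1          # tear them down again
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length              # back to 1

The killprocess of pid 939789 that follows is the tail end of the nvmftestfini cleanup.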
00:12:29.453 08:04:59 -- nvmf/common.sh@477 -- # '[' -n 939789 ']' 00:12:29.453 08:04:59 -- nvmf/common.sh@478 -- # killprocess 939789 00:12:29.453 08:04:59 -- common/autotest_common.sh@926 -- # '[' -z 939789 ']' 00:12:29.453 08:04:59 -- common/autotest_common.sh@930 -- # kill -0 939789 00:12:29.453 08:04:59 -- common/autotest_common.sh@931 -- # uname 00:12:29.453 08:04:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:29.453 08:04:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 939789 00:12:29.453 08:04:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:29.453 08:04:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:29.453 08:04:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 939789' 00:12:29.453 killing process with pid 939789 00:12:29.453 08:04:59 -- common/autotest_common.sh@945 -- # kill 939789 00:12:29.453 08:04:59 -- common/autotest_common.sh@950 -- # wait 939789 00:12:29.453 08:05:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:29.453 08:05:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:29.453 08:05:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:29.453 08:05:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.453 08:05:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:29.453 08:05:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.453 08:05:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.453 08:05:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.005 08:05:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:32.005 00:12:32.005 real 0m11.213s 00:12:32.005 user 0m9.236s 00:12:32.005 sys 0m5.724s 00:12:32.005 08:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.005 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 ************************************ 00:12:32.005 END TEST nvmf_multitarget 00:12:32.005 ************************************ 00:12:32.005 08:05:02 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.005 08:05:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:32.005 08:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:32.005 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 ************************************ 00:12:32.005 START TEST nvmf_rpc 00:12:32.005 ************************************ 00:12:32.005 08:05:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.005 * Looking for test storage... 
00:12:32.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.005 08:05:02 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.005 08:05:02 -- nvmf/common.sh@7 -- # uname -s 00:12:32.006 08:05:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.006 08:05:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.006 08:05:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.006 08:05:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.006 08:05:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.006 08:05:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.006 08:05:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.006 08:05:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.006 08:05:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.006 08:05:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.006 08:05:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:32.006 08:05:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:32.006 08:05:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.006 08:05:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.006 08:05:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.006 08:05:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.006 08:05:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.006 08:05:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.006 08:05:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.006 08:05:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.006 08:05:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.006 08:05:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.006 08:05:02 -- paths/export.sh@5 -- # export PATH 00:12:32.006 08:05:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.006 08:05:02 -- nvmf/common.sh@46 -- # : 0 00:12:32.006 08:05:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:32.006 08:05:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:32.006 08:05:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:32.006 08:05:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.006 08:05:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.006 08:05:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:32.006 08:05:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:32.006 08:05:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:32.006 08:05:02 -- target/rpc.sh@11 -- # loops=5 00:12:32.006 08:05:02 -- target/rpc.sh@23 -- # nvmftestinit 00:12:32.006 08:05:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:32.006 08:05:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.006 08:05:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:32.006 08:05:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:32.006 08:05:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:32.006 08:05:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.006 08:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.006 08:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.006 08:05:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:32.006 08:05:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:32.006 08:05:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:32.006 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:38.595 08:05:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:38.595 08:05:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:38.595 08:05:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:38.595 08:05:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:38.595 08:05:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:38.595 08:05:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:38.595 08:05:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:38.595 08:05:09 -- nvmf/common.sh@294 -- # net_devs=() 00:12:38.595 08:05:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:38.595 08:05:09 -- nvmf/common.sh@295 -- # e810=() 00:12:38.595 08:05:09 -- nvmf/common.sh@295 -- # local -ga e810 00:12:38.595 
08:05:09 -- nvmf/common.sh@296 -- # x722=() 00:12:38.595 08:05:09 -- nvmf/common.sh@296 -- # local -ga x722 00:12:38.595 08:05:09 -- nvmf/common.sh@297 -- # mlx=() 00:12:38.595 08:05:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:38.595 08:05:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.595 08:05:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:38.595 08:05:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:38.595 08:05:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:38.595 08:05:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:38.595 08:05:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:38.595 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:38.595 08:05:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:38.595 08:05:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:38.595 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:38.595 08:05:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:38.595 08:05:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:38.595 08:05:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.595 08:05:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:38.595 08:05:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.595 08:05:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:38.595 Found net devices under 0000:31:00.0: cvl_0_0 00:12:38.595 08:05:09 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:38.595 08:05:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:38.595 08:05:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.595 08:05:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:38.595 08:05:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.595 08:05:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:38.595 Found net devices under 0000:31:00.1: cvl_0_1 00:12:38.595 08:05:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.595 08:05:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:38.595 08:05:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:38.595 08:05:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:38.595 08:05:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:38.595 08:05:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.595 08:05:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.595 08:05:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.595 08:05:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:38.595 08:05:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.595 08:05:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.595 08:05:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:38.595 08:05:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.595 08:05:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.595 08:05:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:38.595 08:05:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:38.595 08:05:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.856 08:05:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.856 08:05:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.856 08:05:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.856 08:05:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:38.856 08:05:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.856 08:05:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.856 08:05:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.116 08:05:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:39.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:12:39.116 00:12:39.116 --- 10.0.0.2 ping statistics --- 00:12:39.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.116 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:12:39.117 08:05:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:39.117 00:12:39.117 --- 10.0.0.1 ping statistics --- 00:12:39.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.117 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:39.117 08:05:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.117 08:05:09 -- nvmf/common.sh@410 -- # return 0 00:12:39.117 08:05:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:39.117 08:05:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.117 08:05:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:39.117 08:05:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:39.117 08:05:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.117 08:05:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:39.117 08:05:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:39.117 08:05:09 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:39.117 08:05:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:39.117 08:05:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:39.117 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:39.117 08:05:09 -- nvmf/common.sh@469 -- # nvmfpid=944557 00:12:39.117 08:05:09 -- nvmf/common.sh@470 -- # waitforlisten 944557 00:12:39.117 08:05:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.117 08:05:09 -- common/autotest_common.sh@819 -- # '[' -z 944557 ']' 00:12:39.117 08:05:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.117 08:05:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:39.117 08:05:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.117 08:05:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:39.117 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:39.117 [2024-06-11 08:05:09.602455] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:39.117 [2024-06-11 08:05:09.602504] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.117 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.117 [2024-06-11 08:05:09.669253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.117 [2024-06-11 08:05:09.735817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:39.117 [2024-06-11 08:05:09.735946] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.117 [2024-06-11 08:05:09.735956] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.117 [2024-06-11 08:05:09.735964] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:39.117 [2024-06-11 08:05:09.736102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.117 [2024-06-11 08:05:09.736223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.117 [2024-06-11 08:05:09.736379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.117 [2024-06-11 08:05:09.736379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.060 08:05:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:40.060 08:05:10 -- common/autotest_common.sh@852 -- # return 0 00:12:40.060 08:05:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:40.060 08:05:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:40.060 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.060 08:05:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.060 08:05:10 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:40.061 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.061 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.061 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.061 08:05:10 -- target/rpc.sh@26 -- # stats='{ 00:12:40.061 "tick_rate": 2400000000, 00:12:40.061 "poll_groups": [ 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_0", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [] 00:12:40.061 }, 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_1", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [] 00:12:40.061 }, 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_2", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [] 00:12:40.061 }, 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_3", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [] 00:12:40.061 } 00:12:40.061 ] 00:12:40.061 }' 00:12:40.061 08:05:10 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:40.061 08:05:10 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:40.061 08:05:10 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:40.061 08:05:10 -- target/rpc.sh@15 -- # wc -l 00:12:40.061 08:05:10 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:40.061 08:05:10 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:40.061 08:05:10 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:40.061 08:05:10 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.061 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.061 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.061 [2024-06-11 08:05:10.528373] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.061 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.061 08:05:10 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:40.061 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.061 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.061 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.061 08:05:10 -- target/rpc.sh@33 -- # stats='{ 00:12:40.061 "tick_rate": 2400000000, 00:12:40.061 "poll_groups": [ 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_0", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [ 00:12:40.061 { 00:12:40.061 "trtype": "TCP" 00:12:40.061 } 00:12:40.061 ] 00:12:40.061 }, 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_1", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [ 00:12:40.061 { 00:12:40.061 "trtype": "TCP" 00:12:40.061 } 00:12:40.061 ] 00:12:40.061 }, 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_2", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [ 00:12:40.061 { 00:12:40.061 "trtype": "TCP" 00:12:40.061 } 00:12:40.061 ] 00:12:40.061 }, 00:12:40.061 { 00:12:40.061 "name": "nvmf_tgt_poll_group_3", 00:12:40.061 "admin_qpairs": 0, 00:12:40.061 "io_qpairs": 0, 00:12:40.061 "current_admin_qpairs": 0, 00:12:40.061 "current_io_qpairs": 0, 00:12:40.061 "pending_bdev_io": 0, 00:12:40.061 "completed_nvme_io": 0, 00:12:40.061 "transports": [ 00:12:40.061 { 00:12:40.061 "trtype": "TCP" 00:12:40.061 } 00:12:40.061 ] 00:12:40.061 } 00:12:40.061 ] 00:12:40.061 }' 00:12:40.061 08:05:10 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:40.061 08:05:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:40.061 08:05:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:40.061 08:05:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.061 08:05:10 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:40.061 08:05:10 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:40.061 08:05:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:40.061 08:05:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:40.061 08:05:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.061 08:05:10 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:40.061 08:05:10 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:40.061 08:05:10 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:40.061 08:05:10 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:40.061 08:05:10 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:40.061 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.061 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.061 Malloc1 00:12:40.061 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.061 08:05:10 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.061 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.061 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.061 
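The nvmf_rpc test now drives the target through rpc_cmd, which in this trace is shorthand for SPDK's scripts/rpc.py (or its daemonized equivalent) talking to the default /var/tmp/spdk.sock of the target started above. A minimal sketch of the calls made so far, with sizes and flags copied verbatim from the trace:

  rpc=scripts/rpc.py                                        # what rpc_cmd resolves to in this run
  $rpc bdev_malloc_create 64 512 -b Malloc1                 # 64 MB bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
  $rpc nvmf_create_transport -t tcp -o -u 8192              # flags taken as-is from NVMF_TRANSPORT_OPTS
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -s sets the serial waitforserial greps for

The lines that follow add Malloc1 as a namespace, add a TCP listener on 10.0.0.2:4420, and then turn allow-any-host back off, so the first nvme connect attempt below is expected to fail with the "does not allow host" error before the host NQN is explicitly added.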
08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.061 08:05:10 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.061 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.061 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.061 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.061 08:05:10 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:40.061 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.061 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.323 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.323 08:05:10 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.323 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.323 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.323 [2024-06-11 08:05:10.715990] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.323 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.323 08:05:10 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:40.323 08:05:10 -- common/autotest_common.sh@640 -- # local es=0 00:12:40.323 08:05:10 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:40.323 08:05:10 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:40.323 08:05:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.323 08:05:10 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:40.323 08:05:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.323 08:05:10 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:40.323 08:05:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:40.323 08:05:10 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:40.323 08:05:10 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:40.323 08:05:10 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:12:40.323 [2024-06-11 08:05:10.742846] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:40.323 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:40.323 could not add new controller: failed to write to nvme-fabrics device 00:12:40.323 08:05:10 -- common/autotest_common.sh@643 -- # es=1 00:12:40.323 08:05:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:40.323 08:05:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:40.323 08:05:10 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:12:40.323 08:05:10 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:40.323 08:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.323 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.323 08:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.323 08:05:10 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.709 08:05:12 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.709 08:05:12 -- common/autotest_common.sh@1177 -- # local i=0 00:12:41.709 08:05:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.709 08:05:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:41.709 08:05:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.622 08:05:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.622 08:05:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.622 08:05:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.622 08:05:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.622 08:05:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.622 08:05:14 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.622 08:05:14 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.883 08:05:14 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.883 08:05:14 -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.883 08:05:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:43.883 08:05:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.883 08:05:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.883 08:05:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.883 08:05:14 -- common/autotest_common.sh@1210 -- # return 0 00:12:43.883 08:05:14 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:43.883 08:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.883 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:12:43.883 08:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.883 08:05:14 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.883 08:05:14 -- common/autotest_common.sh@640 -- # local es=0 00:12:43.883 08:05:14 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.883 08:05:14 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:43.883 08:05:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.883 08:05:14 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:43.883 08:05:14 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.883 08:05:14 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:43.883 08:05:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.883 08:05:14 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:43.883 08:05:14 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:43.883 08:05:14 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.883 [2024-06-11 08:05:14.385884] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:12:43.883 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:43.883 could not add new controller: failed to write to nvme-fabrics device 00:12:43.883 08:05:14 -- common/autotest_common.sh@643 -- # es=1 00:12:43.883 08:05:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:43.883 08:05:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:43.883 08:05:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:43.883 08:05:14 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:43.883 08:05:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.883 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:12:43.883 08:05:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.883 08:05:14 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:45.269 08:05:15 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.269 08:05:15 -- common/autotest_common.sh@1177 -- # local i=0 00:12:45.269 08:05:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.269 08:05:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:45.269 08:05:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:47.226 08:05:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:47.226 08:05:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:47.226 08:05:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.226 08:05:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:47.226 08:05:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.226 08:05:17 -- common/autotest_common.sh@1187 -- # return 0 00:12:47.226 08:05:17 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.487 08:05:17 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.487 08:05:17 -- common/autotest_common.sh@1198 -- # local i=0 00:12:47.487 08:05:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:47.487 08:05:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.487 08:05:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:47.487 08:05:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.487 08:05:17 -- common/autotest_common.sh@1210 -- # return 0 00:12:47.487 08:05:17 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.487 08:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.487 08:05:17 -- common/autotest_common.sh@10 -- # set +x 00:12:47.487 08:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.487 08:05:18 -- target/rpc.sh@81 -- # seq 1 5 00:12:47.487 08:05:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.487 08:05:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.487 08:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.487 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:47.487 08:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.487 08:05:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.487 08:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.487 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:47.487 [2024-06-11 08:05:18.025864] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.487 08:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.487 08:05:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.487 08:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.487 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:47.487 08:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.487 08:05:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.487 08:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.487 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:12:47.487 08:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.487 08:05:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.398 08:05:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.398 08:05:19 -- common/autotest_common.sh@1177 -- # local i=0 00:12:49.398 08:05:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.398 08:05:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:49.398 08:05:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:51.310 08:05:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:51.310 08:05:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:51.310 08:05:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.310 08:05:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:51.310 08:05:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.310 08:05:21 -- common/autotest_common.sh@1187 -- # return 0 00:12:51.310 08:05:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.310 08:05:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.310 08:05:21 -- common/autotest_common.sh@1198 -- # local i=0 00:12:51.310 08:05:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:51.310 08:05:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:51.310 08:05:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:51.310 08:05:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.311 08:05:21 -- common/autotest_common.sh@1210 -- # return 0 00:12:51.311 08:05:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.311 08:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.311 08:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.311 08:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.311 08:05:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.311 08:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.311 08:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.311 08:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.311 08:05:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.311 08:05:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.311 08:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.311 08:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.311 08:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.311 08:05:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.311 08:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.311 08:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.311 [2024-06-11 08:05:21.717720] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.311 08:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.311 08:05:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.311 08:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.311 08:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.311 08:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.311 08:05:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.311 08:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.311 08:05:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.311 08:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.311 08:05:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.722 08:05:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.722 08:05:23 -- common/autotest_common.sh@1177 -- # local i=0 00:12:52.722 08:05:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.722 08:05:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:52.722 08:05:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:54.631 08:05:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:54.631 08:05:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:54.631 08:05:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.631 08:05:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:54.631 08:05:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.631 08:05:25 -- 
common/autotest_common.sh@1187 -- # return 0 00:12:54.631 08:05:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.891 08:05:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.891 08:05:25 -- common/autotest_common.sh@1198 -- # local i=0 00:12:54.891 08:05:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:54.891 08:05:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.891 08:05:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:54.891 08:05:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.891 08:05:25 -- common/autotest_common.sh@1210 -- # return 0 00:12:54.891 08:05:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:54.891 08:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.891 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 08:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.891 08:05:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.891 08:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.891 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 08:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.891 08:05:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.891 08:05:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.891 08:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.891 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 08:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.891 08:05:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.891 08:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.891 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 [2024-06-11 08:05:25.376830] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.891 08:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.891 08:05:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.891 08:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.891 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 08:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.891 08:05:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.891 08:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.891 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.891 08:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.891 08:05:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.471 08:05:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.471 08:05:26 -- common/autotest_common.sh@1177 -- # local i=0 00:12:56.471 08:05:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.471 08:05:26 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:12:56.471 08:05:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:58.461 08:05:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:58.461 08:05:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:58.461 08:05:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.461 08:05:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:58.461 08:05:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.461 08:05:28 -- common/autotest_common.sh@1187 -- # return 0 00:12:58.461 08:05:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.461 08:05:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.461 08:05:28 -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.461 08:05:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:58.461 08:05:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.461 08:05:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:58.461 08:05:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.461 08:05:28 -- common/autotest_common.sh@1210 -- # return 0 00:12:58.461 08:05:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.461 08:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.461 08:05:28 -- common/autotest_common.sh@10 -- # set +x 00:12:58.461 08:05:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.461 08:05:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.461 08:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.461 08:05:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.461 08:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.461 08:05:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.461 08:05:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.461 08:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.461 08:05:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.461 08:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.461 08:05:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.461 08:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.461 08:05:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.461 [2024-06-11 08:05:29.032799] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.461 08:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.461 08:05:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.461 08:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.461 08:05:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.461 08:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.461 08:05:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.461 08:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:58.461 08:05:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.461 08:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:58.461 
08:05:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.371 08:05:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.371 08:05:30 -- common/autotest_common.sh@1177 -- # local i=0 00:13:00.371 08:05:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.371 08:05:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:00.371 08:05:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:02.282 08:05:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:02.282 08:05:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:02.282 08:05:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.282 08:05:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:02.282 08:05:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.282 08:05:32 -- common/autotest_common.sh@1187 -- # return 0 00:13:02.282 08:05:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.282 08:05:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.282 08:05:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.282 08:05:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:02.282 08:05:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.282 08:05:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:02.282 08:05:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.282 08:05:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:02.282 08:05:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.282 08:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.282 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.282 08:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.282 08:05:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.282 08:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.282 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.282 08:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.282 08:05:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.282 08:05:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.282 08:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.282 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.282 08:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.282 08:05:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.282 08:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.282 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.282 [2024-06-11 08:05:32.729177] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.282 08:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.282 08:05:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.282 
08:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.282 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.282 08:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.282 08:05:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.282 08:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.282 08:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.282 08:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.282 08:05:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.667 08:05:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.667 08:05:34 -- common/autotest_common.sh@1177 -- # local i=0 00:13:03.667 08:05:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.667 08:05:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:03.667 08:05:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:05.578 08:05:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:05.578 08:05:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:05.578 08:05:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.578 08:05:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:05.578 08:05:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.578 08:05:36 -- common/autotest_common.sh@1187 -- # return 0 00:13:05.578 08:05:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.838 08:05:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.838 08:05:36 -- common/autotest_common.sh@1198 -- # local i=0 00:13:05.838 08:05:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:05.839 08:05:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.839 08:05:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:05.839 08:05:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.839 08:05:36 -- common/autotest_common.sh@1210 -- # return 0 00:13:05.839 08:05:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@99 -- # seq 1 5 00:13:05.839 08:05:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.839 08:05:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 [2024-06-11 08:05:36.396984] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.839 08:05:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 [2024-06-11 08:05:36.453119] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:05.839 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.839 08:05:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.839 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.839 08:05:36 -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:06.100 08:05:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 [2024-06-11 08:05:36.513308] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:06.100 08:05:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 [2024-06-11 08:05:36.569500] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 
08:05:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:06.100 08:05:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 [2024-06-11 08:05:36.625695] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.100 08:05:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.100 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.100 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.100 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.101 08:05:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.101 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.101 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.101 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.101 08:05:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.101 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.101 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.101 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.101 08:05:36 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
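The nvmf_get_stats call announced here returns one JSON object per poll group (its output follows immediately below), and the script's jsum helper then collapses a chosen counter across all poll groups with a jq projection piped through an awk sum. A standalone equivalent of that check might look like this; the variable names are illustrative and only the jq filters and the awk expression mirror what the trace shows.

# Sum per-poll-group counters out of nvmf_get_stats (illustrative helper).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
stats=$($rpc nvmf_get_stats)                                          # JSON with .poll_groups[]
admin=$(jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}')
io=$(jq '.poll_groups[].io_qpairs'       <<<"$stats" | awk '{s+=$1} END {print s}')
(( admin > 0 && io > 0 )) && echo "observed $admin admin and $io I/O queue pairs"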
00:13:06.101 08:05:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:06.101 08:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:06.101 08:05:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:06.101 08:05:36 -- target/rpc.sh@110 -- # stats='{ 00:13:06.101 "tick_rate": 2400000000, 00:13:06.101 "poll_groups": [ 00:13:06.101 { 00:13:06.101 "name": "nvmf_tgt_poll_group_0", 00:13:06.101 "admin_qpairs": 0, 00:13:06.101 "io_qpairs": 224, 00:13:06.101 "current_admin_qpairs": 0, 00:13:06.101 "current_io_qpairs": 0, 00:13:06.101 "pending_bdev_io": 0, 00:13:06.101 "completed_nvme_io": 518, 00:13:06.101 "transports": [ 00:13:06.101 { 00:13:06.101 "trtype": "TCP" 00:13:06.101 } 00:13:06.101 ] 00:13:06.101 }, 00:13:06.101 { 00:13:06.101 "name": "nvmf_tgt_poll_group_1", 00:13:06.101 "admin_qpairs": 1, 00:13:06.101 "io_qpairs": 223, 00:13:06.101 "current_admin_qpairs": 0, 00:13:06.101 "current_io_qpairs": 0, 00:13:06.101 "pending_bdev_io": 0, 00:13:06.101 "completed_nvme_io": 223, 00:13:06.101 "transports": [ 00:13:06.101 { 00:13:06.101 "trtype": "TCP" 00:13:06.101 } 00:13:06.101 ] 00:13:06.101 }, 00:13:06.101 { 00:13:06.101 "name": "nvmf_tgt_poll_group_2", 00:13:06.101 "admin_qpairs": 6, 00:13:06.101 "io_qpairs": 218, 00:13:06.101 "current_admin_qpairs": 0, 00:13:06.101 "current_io_qpairs": 0, 00:13:06.101 "pending_bdev_io": 0, 00:13:06.101 "completed_nvme_io": 222, 00:13:06.101 "transports": [ 00:13:06.101 { 00:13:06.101 "trtype": "TCP" 00:13:06.101 } 00:13:06.101 ] 00:13:06.101 }, 00:13:06.101 { 00:13:06.101 "name": "nvmf_tgt_poll_group_3", 00:13:06.101 "admin_qpairs": 0, 00:13:06.101 "io_qpairs": 224, 00:13:06.101 "current_admin_qpairs": 0, 00:13:06.101 "current_io_qpairs": 0, 00:13:06.101 "pending_bdev_io": 0, 00:13:06.101 "completed_nvme_io": 276, 00:13:06.101 "transports": [ 00:13:06.101 { 00:13:06.101 "trtype": "TCP" 00:13:06.101 } 00:13:06.101 ] 00:13:06.101 } 00:13:06.101 ] 00:13:06.101 }' 00:13:06.101 08:05:36 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:06.101 08:05:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:06.101 08:05:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:06.101 08:05:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:06.101 08:05:36 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:06.101 08:05:36 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:06.101 08:05:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:06.101 08:05:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:06.101 08:05:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:06.362 08:05:36 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:06.362 08:05:36 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:06.362 08:05:36 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:06.362 08:05:36 -- target/rpc.sh@123 -- # nvmftestfini 00:13:06.362 08:05:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:06.362 08:05:36 -- nvmf/common.sh@116 -- # sync 00:13:06.362 08:05:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:06.362 08:05:36 -- nvmf/common.sh@119 -- # set +e 00:13:06.362 08:05:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:06.362 08:05:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:06.362 rmmod nvme_tcp 00:13:06.362 rmmod nvme_fabrics 00:13:06.362 rmmod nvme_keyring 00:13:06.362 08:05:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:06.362 08:05:36 -- nvmf/common.sh@123 -- # set -e 00:13:06.362 08:05:36 -- 
nvmf/common.sh@124 -- # return 0 00:13:06.362 08:05:36 -- nvmf/common.sh@477 -- # '[' -n 944557 ']' 00:13:06.362 08:05:36 -- nvmf/common.sh@478 -- # killprocess 944557 00:13:06.362 08:05:36 -- common/autotest_common.sh@926 -- # '[' -z 944557 ']' 00:13:06.362 08:05:36 -- common/autotest_common.sh@930 -- # kill -0 944557 00:13:06.362 08:05:36 -- common/autotest_common.sh@931 -- # uname 00:13:06.362 08:05:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:06.362 08:05:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 944557 00:13:06.362 08:05:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:06.362 08:05:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:06.362 08:05:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 944557' 00:13:06.362 killing process with pid 944557 00:13:06.362 08:05:36 -- common/autotest_common.sh@945 -- # kill 944557 00:13:06.362 08:05:36 -- common/autotest_common.sh@950 -- # wait 944557 00:13:06.622 08:05:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:06.622 08:05:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:06.622 08:05:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:06.622 08:05:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.622 08:05:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:06.622 08:05:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.622 08:05:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.622 08:05:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.534 08:05:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:08.534 00:13:08.534 real 0m36.958s 00:13:08.534 user 1m51.251s 00:13:08.534 sys 0m6.887s 00:13:08.534 08:05:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.534 08:05:39 -- common/autotest_common.sh@10 -- # set +x 00:13:08.534 ************************************ 00:13:08.534 END TEST nvmf_rpc 00:13:08.534 ************************************ 00:13:08.534 08:05:39 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:08.534 08:05:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:08.534 08:05:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:08.534 08:05:39 -- common/autotest_common.sh@10 -- # set +x 00:13:08.534 ************************************ 00:13:08.534 START TEST nvmf_invalid 00:13:08.534 ************************************ 00:13:08.534 08:05:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:08.795 * Looking for test storage... 
00:13:08.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.795 08:05:39 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.795 08:05:39 -- nvmf/common.sh@7 -- # uname -s 00:13:08.795 08:05:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.795 08:05:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.795 08:05:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.795 08:05:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.795 08:05:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.795 08:05:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.795 08:05:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.795 08:05:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.795 08:05:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.795 08:05:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.795 08:05:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:08.795 08:05:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:08.795 08:05:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.795 08:05:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.795 08:05:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.795 08:05:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.795 08:05:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.795 08:05:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.795 08:05:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.795 08:05:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.795 08:05:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.795 08:05:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.795 08:05:39 -- paths/export.sh@5 -- # export PATH 00:13:08.795 08:05:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.795 08:05:39 -- nvmf/common.sh@46 -- # : 0 00:13:08.795 08:05:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:08.795 08:05:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:08.795 08:05:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:08.795 08:05:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.795 08:05:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.795 08:05:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:08.795 08:05:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:08.795 08:05:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:08.795 08:05:39 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:08.795 08:05:39 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.795 08:05:39 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:08.795 08:05:39 -- target/invalid.sh@14 -- # target=foobar 00:13:08.795 08:05:39 -- target/invalid.sh@16 -- # RANDOM=0 00:13:08.795 08:05:39 -- target/invalid.sh@34 -- # nvmftestinit 00:13:08.795 08:05:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:08.795 08:05:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.795 08:05:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:08.795 08:05:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:08.795 08:05:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:08.795 08:05:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.795 08:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.795 08:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.795 08:05:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:08.795 08:05:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:08.795 08:05:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:08.795 08:05:39 -- common/autotest_common.sh@10 -- # set +x 00:13:16.941 08:05:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.941 08:05:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:16.941 08:05:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:16.941 08:05:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:16.941 08:05:46 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:16.941 08:05:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:16.941 08:05:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:16.941 08:05:46 -- nvmf/common.sh@294 -- # net_devs=() 00:13:16.941 08:05:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:16.941 08:05:46 -- nvmf/common.sh@295 -- # e810=() 00:13:16.941 08:05:46 -- nvmf/common.sh@295 -- # local -ga e810 00:13:16.941 08:05:46 -- nvmf/common.sh@296 -- # x722=() 00:13:16.941 08:05:46 -- nvmf/common.sh@296 -- # local -ga x722 00:13:16.941 08:05:46 -- nvmf/common.sh@297 -- # mlx=() 00:13:16.941 08:05:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:16.941 08:05:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.941 08:05:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:16.941 08:05:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:16.941 08:05:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:16.941 08:05:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.941 08:05:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:16.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:16.941 08:05:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:16.941 08:05:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:16.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:16.941 08:05:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:16.941 08:05:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.941 
08:05:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.941 08:05:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.941 08:05:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.941 08:05:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:16.941 Found net devices under 0000:31:00.0: cvl_0_0 00:13:16.941 08:05:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.941 08:05:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:16.941 08:05:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.941 08:05:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:16.941 08:05:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.941 08:05:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:16.941 Found net devices under 0000:31:00.1: cvl_0_1 00:13:16.941 08:05:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.941 08:05:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:16.941 08:05:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:16.941 08:05:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:16.941 08:05:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:16.941 08:05:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.941 08:05:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.941 08:05:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.941 08:05:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:16.941 08:05:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.941 08:05:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.941 08:05:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:16.941 08:05:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.941 08:05:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.941 08:05:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:16.941 08:05:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:16.941 08:05:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.941 08:05:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.941 08:05:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.941 08:05:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.941 08:05:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:16.941 08:05:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.941 08:05:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.941 08:05:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.941 08:05:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:16.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.761 ms 00:13:16.941 00:13:16.941 --- 10.0.0.2 ping statistics --- 00:13:16.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.941 rtt min/avg/max/mdev = 0.761/0.761/0.761/0.000 ms 00:13:16.941 08:05:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:13:16.941 00:13:16.941 --- 10.0.0.1 ping statistics --- 00:13:16.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.941 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:13:16.941 08:05:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.942 08:05:46 -- nvmf/common.sh@410 -- # return 0 00:13:16.942 08:05:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:16.942 08:05:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.942 08:05:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:16.942 08:05:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:16.942 08:05:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.942 08:05:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:16.942 08:05:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:16.942 08:05:46 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:16.942 08:05:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:16.942 08:05:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:16.942 08:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.942 08:05:46 -- nvmf/common.sh@469 -- # nvmfpid=954273 00:13:16.942 08:05:46 -- nvmf/common.sh@470 -- # waitforlisten 954273 00:13:16.942 08:05:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.942 08:05:46 -- common/autotest_common.sh@819 -- # '[' -z 954273 ']' 00:13:16.942 08:05:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.942 08:05:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:16.942 08:05:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.942 08:05:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:16.942 08:05:46 -- common/autotest_common.sh@10 -- # set +x 00:13:16.942 [2024-06-11 08:05:46.691493] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:16.942 [2024-06-11 08:05:46.691554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.942 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.942 [2024-06-11 08:05:46.762936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.942 [2024-06-11 08:05:46.836294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:16.942 [2024-06-11 08:05:46.836428] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.942 [2024-06-11 08:05:46.836444] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.942 [2024-06-11 08:05:46.836452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
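Before any of the invalid-name cases run, the prologue above launches a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket (the script does this through nvmfappstart/waitforlisten). A rough, self-contained equivalent is sketched below; the readiness probe via rpc_get_methods is an assumption standing in for the script's own waitforlisten logic, not a copy of it.

# Start the target in the test namespace and poll until its RPC socket answers.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until $rpc -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up and accepting RPCs"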
00:13:16.942 [2024-06-11 08:05:46.836527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.942 [2024-06-11 08:05:46.836798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.942 [2024-06-11 08:05:46.836956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.942 [2024-06-11 08:05:46.836957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.942 08:05:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:16.942 08:05:47 -- common/autotest_common.sh@852 -- # return 0 00:13:16.942 08:05:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:16.942 08:05:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:16.942 08:05:47 -- common/autotest_common.sh@10 -- # set +x 00:13:16.942 08:05:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.942 08:05:47 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:16.942 08:05:47 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31137 00:13:17.202 [2024-06-11 08:05:47.646968] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:17.202 08:05:47 -- target/invalid.sh@40 -- # out='request: 00:13:17.202 { 00:13:17.202 "nqn": "nqn.2016-06.io.spdk:cnode31137", 00:13:17.202 "tgt_name": "foobar", 00:13:17.202 "method": "nvmf_create_subsystem", 00:13:17.202 "req_id": 1 00:13:17.202 } 00:13:17.202 Got JSON-RPC error response 00:13:17.202 response: 00:13:17.202 { 00:13:17.202 "code": -32603, 00:13:17.202 "message": "Unable to find target foobar" 00:13:17.202 }' 00:13:17.202 08:05:47 -- target/invalid.sh@41 -- # [[ request: 00:13:17.202 { 00:13:17.202 "nqn": "nqn.2016-06.io.spdk:cnode31137", 00:13:17.202 "tgt_name": "foobar", 00:13:17.202 "method": "nvmf_create_subsystem", 00:13:17.202 "req_id": 1 00:13:17.202 } 00:13:17.202 Got JSON-RPC error response 00:13:17.202 response: 00:13:17.202 { 00:13:17.202 "code": -32603, 00:13:17.202 "message": "Unable to find target foobar" 00:13:17.202 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:17.202 08:05:47 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:17.202 08:05:47 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32243 00:13:17.202 [2024-06-11 08:05:47.819574] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32243: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:17.463 08:05:47 -- target/invalid.sh@45 -- # out='request: 00:13:17.463 { 00:13:17.463 "nqn": "nqn.2016-06.io.spdk:cnode32243", 00:13:17.463 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:17.463 "method": "nvmf_create_subsystem", 00:13:17.463 "req_id": 1 00:13:17.463 } 00:13:17.463 Got JSON-RPC error response 00:13:17.463 response: 00:13:17.463 { 00:13:17.463 "code": -32602, 00:13:17.463 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:17.463 }' 00:13:17.463 08:05:47 -- target/invalid.sh@46 -- # [[ request: 00:13:17.463 { 00:13:17.463 "nqn": "nqn.2016-06.io.spdk:cnode32243", 00:13:17.463 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:17.463 "method": "nvmf_create_subsystem", 00:13:17.463 "req_id": 1 00:13:17.463 } 00:13:17.463 Got JSON-RPC error response 00:13:17.463 response: 00:13:17.463 { 
00:13:17.463 "code": -32602, 00:13:17.463 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:17.463 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:17.463 08:05:47 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:17.463 08:05:47 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30104 00:13:17.463 [2024-06-11 08:05:47.992164] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30104: invalid model number 'SPDK_Controller' 00:13:17.463 08:05:48 -- target/invalid.sh@50 -- # out='request: 00:13:17.463 { 00:13:17.463 "nqn": "nqn.2016-06.io.spdk:cnode30104", 00:13:17.463 "model_number": "SPDK_Controller\u001f", 00:13:17.463 "method": "nvmf_create_subsystem", 00:13:17.463 "req_id": 1 00:13:17.463 } 00:13:17.463 Got JSON-RPC error response 00:13:17.463 response: 00:13:17.463 { 00:13:17.463 "code": -32602, 00:13:17.463 "message": "Invalid MN SPDK_Controller\u001f" 00:13:17.463 }' 00:13:17.463 08:05:48 -- target/invalid.sh@51 -- # [[ request: 00:13:17.463 { 00:13:17.463 "nqn": "nqn.2016-06.io.spdk:cnode30104", 00:13:17.463 "model_number": "SPDK_Controller\u001f", 00:13:17.463 "method": "nvmf_create_subsystem", 00:13:17.463 "req_id": 1 00:13:17.463 } 00:13:17.463 Got JSON-RPC error response 00:13:17.463 response: 00:13:17.463 { 00:13:17.463 "code": -32602, 00:13:17.463 "message": "Invalid MN SPDK_Controller\u001f" 00:13:17.463 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:17.463 08:05:48 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:17.463 08:05:48 -- target/invalid.sh@19 -- # local length=21 ll 00:13:17.463 08:05:48 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:17.463 08:05:48 -- target/invalid.sh@21 -- # local chars 00:13:17.463 08:05:48 -- target/invalid.sh@22 -- # local string 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 113 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+=q 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 108 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+=l 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 44 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+=, 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 34 00:13:17.463 08:05:48 -- 
target/invalid.sh@25 -- # echo -e '\x22' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+='"' 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 44 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+=, 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 57 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+=9 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 92 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+='\' 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 74 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+=J 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 89 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+=Y 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # printf %x 92 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:17.463 08:05:48 -- target/invalid.sh@25 -- # string+='\' 00:13:17.463 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.464 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.464 08:05:48 -- target/invalid.sh@25 -- # printf %x 66 00:13:17.464 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:17.464 08:05:48 -- target/invalid.sh@25 -- # string+=B 00:13:17.464 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.464 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 115 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=s 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 52 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=4 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 92 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+='\' 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 80 00:13:17.724 08:05:48 -- 
target/invalid.sh@25 -- # echo -e '\x50' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=P 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 75 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=K 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 55 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=7 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 112 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=p 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 68 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=D 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 37 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+=% 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # printf %x 94 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:17.724 08:05:48 -- target/invalid.sh@25 -- # string+='^' 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.724 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.724 08:05:48 -- target/invalid.sh@28 -- # [[ q == \- ]] 00:13:17.724 08:05:48 -- target/invalid.sh@31 -- # echo 'ql,",9\JY\Bs4\PK7pD%^' 00:13:17.724 08:05:48 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ql,",9\JY\Bs4\PK7pD%^' nqn.2016-06.io.spdk:cnode4740 00:13:17.724 [2024-06-11 08:05:48.317177] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4740: invalid serial number 'ql,",9\JY\Bs4\PK7pD%^' 00:13:17.724 08:05:48 -- target/invalid.sh@54 -- # out='request: 00:13:17.724 { 00:13:17.724 "nqn": "nqn.2016-06.io.spdk:cnode4740", 00:13:17.724 "serial_number": "ql,\",9\\JY\\Bs4\\PK7pD%^", 00:13:17.724 "method": "nvmf_create_subsystem", 00:13:17.724 "req_id": 1 00:13:17.724 } 00:13:17.724 Got JSON-RPC error response 00:13:17.724 response: 00:13:17.724 { 00:13:17.724 "code": -32602, 00:13:17.724 "message": "Invalid SN ql,\",9\\JY\\Bs4\\PK7pD%^" 00:13:17.724 }' 00:13:17.724 08:05:48 -- target/invalid.sh@55 -- # [[ request: 00:13:17.724 { 00:13:17.724 "nqn": "nqn.2016-06.io.spdk:cnode4740", 00:13:17.724 "serial_number": "ql,\",9\\JY\\Bs4\\PK7pD%^", 00:13:17.724 "method": "nvmf_create_subsystem", 00:13:17.724 "req_id": 1 00:13:17.724 } 00:13:17.724 Got JSON-RPC error response 00:13:17.724 response: 00:13:17.724 { 00:13:17.724 "code": -32602, 00:13:17.724 
"message": "Invalid SN ql,\",9\\JY\\Bs4\\PK7pD%^" 00:13:17.724 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:17.724 08:05:48 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:17.724 08:05:48 -- target/invalid.sh@19 -- # local length=41 ll 00:13:17.724 08:05:48 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:17.725 08:05:48 -- target/invalid.sh@21 -- # local chars 00:13:17.725 08:05:48 -- target/invalid.sh@22 -- # local string 00:13:17.725 08:05:48 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:17.725 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.725 08:05:48 -- target/invalid.sh@25 -- # printf %x 118 00:13:17.725 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:17.725 08:05:48 -- target/invalid.sh@25 -- # string+=v 00:13:17.725 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.725 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.725 08:05:48 -- target/invalid.sh@25 -- # printf %x 106 00:13:17.725 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:17.725 08:05:48 -- target/invalid.sh@25 -- # string+=j 00:13:17.725 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.725 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 60 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+='<' 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 106 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=j 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 56 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=8 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 34 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+='"' 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 81 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=Q 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 33 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+='!' 
00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 120 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=x 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 113 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=q 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 94 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+='^' 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 122 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=z 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 80 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=P 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 108 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+=l 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 124 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # string+='|' 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.986 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.986 08:05:48 -- target/invalid.sh@25 -- # printf %x 105 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=i 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 60 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+='<' 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 53 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=5 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 45 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=- 
00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 123 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+='{' 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 50 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=2 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 46 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=. 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 95 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=_ 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 113 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=q 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 66 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=B 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 87 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=W 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 51 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=3 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 35 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+='#' 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 67 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=C 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 114 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=r 
00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 37 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=% 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 123 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+='{' 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 98 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=b 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 91 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+='[' 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 53 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=5 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 61 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+== 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 52 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # string+=4 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:17.987 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # printf %x 58 00:13:17.987 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # string+=: 00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # printf %x 53 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # string+=5 00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # printf %x 94 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # string+='^' 00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # printf %x 63 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:18.249 08:05:48 -- target/invalid.sh@25 -- # string+='?' 
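[annotation] Each printf %x / echo -e / string+= triple traced above is one iteration of gen_random_s: pick an ASCII code between 32 and 127 from the chars array, decode it, append the character. Condensed into a stand-alone sketch (paraphrased from the trace, not copied from invalid.sh; the real helper also checks whether the first character is a '-', the [[ ... == \- ]] step, which this sketch omits):

    # Build a random string of printable/edge ASCII, one character per loop pass.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))              # same code range as the traced array
        for (( ll = 0; ll < length; ll++ )); do
            local code=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\\x$(printf '%x' "$code")")
        done
        echo "$string"
    }
    gen_random_s 41     # e.g. an over-length model/serial candidate like the one echoed below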
00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:18.249 08:05:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:18.249 08:05:48 -- target/invalid.sh@28 -- # [[ v == \- ]] 00:13:18.249 08:05:48 -- target/invalid.sh@31 -- # echo 'vj /dev/null' 00:13:20.070 08:05:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.979 08:05:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:21.979 00:13:21.979 real 0m13.428s 00:13:21.979 user 0m18.823s 00:13:21.979 sys 0m6.396s 00:13:21.979 08:05:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.979 08:05:52 -- common/autotest_common.sh@10 -- # set +x 00:13:21.979 ************************************ 00:13:21.979 END TEST nvmf_invalid 00:13:21.979 ************************************ 00:13:21.979 08:05:52 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:21.979 08:05:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:21.979 08:05:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:21.979 08:05:52 -- common/autotest_common.sh@10 -- # set +x 00:13:22.239 ************************************ 00:13:22.239 START TEST nvmf_abort 00:13:22.239 ************************************ 00:13:22.239 08:05:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:22.239 * Looking for test storage... 00:13:22.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.239 08:05:52 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.239 08:05:52 -- nvmf/common.sh@7 -- # uname -s 00:13:22.239 08:05:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.239 08:05:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.239 08:05:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.239 08:05:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.239 08:05:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.239 08:05:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.239 08:05:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.239 08:05:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.239 08:05:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.239 08:05:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.239 08:05:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.239 08:05:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:22.239 08:05:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.239 08:05:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.239 08:05:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.239 08:05:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.239 08:05:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.239 08:05:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.239 08:05:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.239 08:05:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.239 08:05:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.240 08:05:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.240 08:05:52 -- paths/export.sh@5 -- # export PATH 00:13:22.240 08:05:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.240 08:05:52 -- nvmf/common.sh@46 -- # : 0 00:13:22.240 08:05:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:22.240 08:05:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:22.240 08:05:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:22.240 08:05:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.240 08:05:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.240 08:05:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:22.240 08:05:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:22.240 08:05:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:22.240 08:05:52 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.240 08:05:52 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:22.240 08:05:52 -- target/abort.sh@14 -- # nvmftestinit 00:13:22.240 08:05:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:22.240 08:05:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.240 08:05:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:22.240 08:05:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:22.240 08:05:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:22.240 08:05:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:22.240 08:05:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.240 08:05:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.240 08:05:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:22.240 08:05:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:22.240 08:05:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:22.240 08:05:52 -- common/autotest_common.sh@10 -- # set +x 00:13:30.376 08:05:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:30.376 08:05:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:30.376 08:05:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:30.376 08:05:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:30.376 08:05:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:30.376 08:05:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:30.376 08:05:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:30.376 08:05:59 -- nvmf/common.sh@294 -- # net_devs=() 00:13:30.376 08:05:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:30.376 08:05:59 -- nvmf/common.sh@295 -- # e810=() 00:13:30.376 08:05:59 -- nvmf/common.sh@295 -- # local -ga e810 00:13:30.376 08:05:59 -- nvmf/common.sh@296 -- # x722=() 00:13:30.376 08:05:59 -- nvmf/common.sh@296 -- # local -ga x722 00:13:30.376 08:05:59 -- nvmf/common.sh@297 -- # mlx=() 00:13:30.376 08:05:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:30.376 08:05:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.376 08:05:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:30.376 08:05:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:30.376 08:05:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:30.376 08:05:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:30.376 08:05:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:30.376 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:30.376 08:05:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:30.376 08:05:59 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:30.376 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:30.376 08:05:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:30.376 08:05:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:30.376 08:05:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.376 08:05:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:30.376 08:05:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.376 08:05:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:30.376 Found net devices under 0000:31:00.0: cvl_0_0 00:13:30.376 08:05:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.376 08:05:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:30.376 08:05:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.376 08:05:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:30.376 08:05:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.376 08:05:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:30.376 Found net devices under 0000:31:00.1: cvl_0_1 00:13:30.376 08:05:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.376 08:05:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:30.376 08:05:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:30.376 08:05:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:30.376 08:05:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:30.377 08:05:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:30.377 08:05:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.377 08:05:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.377 08:05:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.377 08:05:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:30.377 08:05:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.377 08:05:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.377 08:05:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:30.377 08:05:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.377 08:05:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.377 08:05:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:30.377 08:05:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:30.377 08:05:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.377 08:05:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.377 08:05:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.377 08:05:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.377 08:05:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:30.377 08:05:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:13:30.377 08:06:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.377 08:06:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.377 08:06:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:30.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:13:30.377 00:13:30.377 --- 10.0.0.2 ping statistics --- 00:13:30.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.377 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:13:30.377 08:06:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:30.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:13:30.377 00:13:30.377 --- 10.0.0.1 ping statistics --- 00:13:30.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.377 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:13:30.377 08:06:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.377 08:06:00 -- nvmf/common.sh@410 -- # return 0 00:13:30.377 08:06:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:30.377 08:06:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.377 08:06:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:30.377 08:06:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:30.377 08:06:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.377 08:06:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:30.377 08:06:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:30.377 08:06:00 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:30.377 08:06:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:30.377 08:06:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:30.377 08:06:00 -- common/autotest_common.sh@10 -- # set +x 00:13:30.377 08:06:00 -- nvmf/common.sh@469 -- # nvmfpid=959537 00:13:30.377 08:06:00 -- nvmf/common.sh@470 -- # waitforlisten 959537 00:13:30.377 08:06:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:30.377 08:06:00 -- common/autotest_common.sh@819 -- # '[' -z 959537 ']' 00:13:30.377 08:06:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.377 08:06:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:30.377 08:06:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.377 08:06:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:30.377 08:06:00 -- common/autotest_common.sh@10 -- # set +x 00:13:30.377 [2024-06-11 08:06:00.179331] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
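[annotation] The nvmf_tcp_init records above amount to splitting the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into its own network namespace and gets 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened, and the two pings confirm the path. Collected in one place (commands copied from this trace, run as root; interface names are the ones this machine reports):

    ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1        # start from clean addresses
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1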
00:13:30.377 [2024-06-11 08:06:00.179399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.377 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.377 [2024-06-11 08:06:00.268717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.377 [2024-06-11 08:06:00.361708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:30.377 [2024-06-11 08:06:00.361864] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.377 [2024-06-11 08:06:00.361874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.377 [2024-06-11 08:06:00.361881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.377 [2024-06-11 08:06:00.362029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.377 [2024-06-11 08:06:00.362194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.377 [2024-06-11 08:06:00.362194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.377 08:06:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:30.377 08:06:00 -- common/autotest_common.sh@852 -- # return 0 00:13:30.377 08:06:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:30.377 08:06:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:30.377 08:06:00 -- common/autotest_common.sh@10 -- # set +x 00:13:30.377 08:06:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.377 08:06:00 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:30.377 08:06:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.377 08:06:00 -- common/autotest_common.sh@10 -- # set +x 00:13:30.377 [2024-06-11 08:06:01.003799] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.377 08:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.377 08:06:01 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:30.377 08:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.377 08:06:01 -- common/autotest_common.sh@10 -- # set +x 00:13:30.638 Malloc0 00:13:30.638 08:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.638 08:06:01 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:30.638 08:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.638 08:06:01 -- common/autotest_common.sh@10 -- # set +x 00:13:30.638 Delay0 00:13:30.638 08:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.638 08:06:01 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:30.638 08:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.638 08:06:01 -- common/autotest_common.sh@10 -- # set +x 00:13:30.638 08:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.638 08:06:01 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:30.638 08:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.638 08:06:01 -- common/autotest_common.sh@10 -- # set +x 00:13:30.638 08:06:01 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:13:30.638 08:06:01 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:30.638 08:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.638 08:06:01 -- common/autotest_common.sh@10 -- # set +x 00:13:30.638 [2024-06-11 08:06:01.085888] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.638 08:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.638 08:06:01 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:30.638 08:06:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.638 08:06:01 -- common/autotest_common.sh@10 -- # set +x 00:13:30.638 08:06:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.638 08:06:01 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:30.638 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.638 [2024-06-11 08:06:01.153792] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:33.179 Initializing NVMe Controllers 00:13:33.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:33.179 controller IO queue size 128 less than required 00:13:33.179 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:33.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:33.179 Initialization complete. Launching workers. 00:13:33.179 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34713 00:13:33.179 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34774, failed to submit 62 00:13:33.179 success 34713, unsuccess 61, failed 0 00:13:33.179 08:06:03 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:33.179 08:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.179 08:06:03 -- common/autotest_common.sh@10 -- # set +x 00:13:33.179 08:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.179 08:06:03 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:33.179 08:06:03 -- target/abort.sh@38 -- # nvmftestfini 00:13:33.179 08:06:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:33.179 08:06:03 -- nvmf/common.sh@116 -- # sync 00:13:33.179 08:06:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:33.179 08:06:03 -- nvmf/common.sh@119 -- # set +e 00:13:33.179 08:06:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:33.179 08:06:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:33.179 rmmod nvme_tcp 00:13:33.179 rmmod nvme_fabrics 00:13:33.179 rmmod nvme_keyring 00:13:33.179 08:06:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:33.179 08:06:03 -- nvmf/common.sh@123 -- # set -e 00:13:33.179 08:06:03 -- nvmf/common.sh@124 -- # return 0 00:13:33.179 08:06:03 -- nvmf/common.sh@477 -- # '[' -n 959537 ']' 00:13:33.179 08:06:03 -- nvmf/common.sh@478 -- # killprocess 959537 00:13:33.179 08:06:03 -- common/autotest_common.sh@926 -- # '[' -z 959537 ']' 00:13:33.179 08:06:03 -- common/autotest_common.sh@930 -- # kill -0 959537 00:13:33.179 08:06:03 -- common/autotest_common.sh@931 -- # uname 00:13:33.179 08:06:03 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:33.179 08:06:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 959537 00:13:33.180 08:06:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:33.180 08:06:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:33.180 08:06:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 959537' 00:13:33.180 killing process with pid 959537 00:13:33.180 08:06:03 -- common/autotest_common.sh@945 -- # kill 959537 00:13:33.180 08:06:03 -- common/autotest_common.sh@950 -- # wait 959537 00:13:33.180 08:06:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:33.180 08:06:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:33.180 08:06:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:33.180 08:06:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.180 08:06:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:33.180 08:06:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.180 08:06:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.180 08:06:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.090 08:06:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:35.090 00:13:35.090 real 0m12.966s 00:13:35.090 user 0m13.461s 00:13:35.090 sys 0m6.211s 00:13:35.090 08:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:35.090 08:06:05 -- common/autotest_common.sh@10 -- # set +x 00:13:35.090 ************************************ 00:13:35.090 END TEST nvmf_abort 00:13:35.090 ************************************ 00:13:35.090 08:06:05 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:35.090 08:06:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:35.090 08:06:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:35.090 08:06:05 -- common/autotest_common.sh@10 -- # set +x 00:13:35.090 ************************************ 00:13:35.090 START TEST nvmf_ns_hotplug_stress 00:13:35.090 ************************************ 00:13:35.090 08:06:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:35.090 * Looking for test storage... 
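[annotation] Stripped of the rpc_cmd plumbing, the nvmf_abort run that just finished configures a single delayed namespace and then drives it with the abort example; the artificial bdev_delay latencies (1000000 in each of -r/-t/-w/-n) are apparently what let a 128-deep queue back up so there are outstanding commands to abort, which is consistent with the "IO queue size 128 less than required" warning and the submitted/success counters above. The calls, with the values from this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Then exercise aborts against the delayed namespace, as in the log:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128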
00:13:35.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.351 08:06:05 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.351 08:06:05 -- nvmf/common.sh@7 -- # uname -s 00:13:35.351 08:06:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.351 08:06:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.351 08:06:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.351 08:06:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.351 08:06:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.351 08:06:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.351 08:06:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.351 08:06:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.351 08:06:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.351 08:06:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.351 08:06:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.351 08:06:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:35.351 08:06:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.351 08:06:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.351 08:06:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.351 08:06:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.351 08:06:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.351 08:06:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.351 08:06:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.351 08:06:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.351 08:06:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.351 08:06:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.351 08:06:05 -- paths/export.sh@5 -- # export PATH 00:13:35.351 08:06:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.351 08:06:05 -- nvmf/common.sh@46 -- # : 0 00:13:35.351 08:06:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:35.351 08:06:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:35.351 08:06:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:35.351 08:06:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.351 08:06:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.351 08:06:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:35.352 08:06:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:35.352 08:06:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:35.352 08:06:05 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:35.352 08:06:05 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:35.352 08:06:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:35.352 08:06:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.352 08:06:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:35.352 08:06:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:35.352 08:06:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:35.352 08:06:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.352 08:06:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.352 08:06:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.352 08:06:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:35.352 08:06:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:35.352 08:06:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:35.352 08:06:05 -- common/autotest_common.sh@10 -- # set +x 00:13:41.935 08:06:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:41.935 08:06:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:41.935 08:06:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:41.935 08:06:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:41.935 08:06:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:41.935 08:06:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:41.935 08:06:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:41.935 08:06:12 -- nvmf/common.sh@294 -- # net_devs=() 00:13:41.935 08:06:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:41.935 08:06:12 -- nvmf/common.sh@295 
-- # e810=() 00:13:41.935 08:06:12 -- nvmf/common.sh@295 -- # local -ga e810 00:13:41.935 08:06:12 -- nvmf/common.sh@296 -- # x722=() 00:13:41.935 08:06:12 -- nvmf/common.sh@296 -- # local -ga x722 00:13:41.935 08:06:12 -- nvmf/common.sh@297 -- # mlx=() 00:13:41.935 08:06:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:41.935 08:06:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.935 08:06:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:41.935 08:06:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:41.935 08:06:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:41.935 08:06:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:41.935 08:06:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:41.935 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:41.935 08:06:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:41.935 08:06:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:41.935 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:41.935 08:06:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:41.935 08:06:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:41.935 08:06:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.935 08:06:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:41.935 08:06:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.935 08:06:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:41.935 Found 
net devices under 0000:31:00.0: cvl_0_0 00:13:41.935 08:06:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.935 08:06:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:41.935 08:06:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.935 08:06:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:41.935 08:06:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.935 08:06:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:41.935 Found net devices under 0000:31:00.1: cvl_0_1 00:13:41.935 08:06:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.935 08:06:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:41.935 08:06:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:41.935 08:06:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:41.935 08:06:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:41.935 08:06:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.935 08:06:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.935 08:06:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.935 08:06:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:41.935 08:06:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.935 08:06:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.935 08:06:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:41.935 08:06:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.935 08:06:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.935 08:06:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:41.935 08:06:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:41.935 08:06:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.935 08:06:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.935 08:06:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.935 08:06:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.196 08:06:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:42.196 08:06:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.196 08:06:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.196 08:06:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.196 08:06:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:42.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:13:42.196 00:13:42.196 --- 10.0.0.2 ping statistics --- 00:13:42.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.196 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:13:42.196 08:06:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:13:42.196 00:13:42.196 --- 10.0.0.1 ping statistics --- 00:13:42.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.196 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:13:42.196 08:06:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.196 08:06:12 -- nvmf/common.sh@410 -- # return 0 00:13:42.196 08:06:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:42.196 08:06:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.196 08:06:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:42.196 08:06:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:42.196 08:06:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.196 08:06:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:42.196 08:06:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:42.196 08:06:12 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:42.196 08:06:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:42.196 08:06:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:42.196 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.196 08:06:12 -- nvmf/common.sh@469 -- # nvmfpid=964983 00:13:42.196 08:06:12 -- nvmf/common.sh@470 -- # waitforlisten 964983 00:13:42.196 08:06:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:42.196 08:06:12 -- common/autotest_common.sh@819 -- # '[' -z 964983 ']' 00:13:42.196 08:06:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.196 08:06:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.196 08:06:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.196 08:06:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.196 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.196 [2024-06-11 08:06:12.821668] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:42.196 [2024-06-11 08:06:12.821727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.457 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.457 [2024-06-11 08:06:12.910604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.457 [2024-06-11 08:06:12.995093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:42.457 [2024-06-11 08:06:12.995253] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.457 [2024-06-11 08:06:12.995265] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.457 [2024-06-11 08:06:12.995273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
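Condensed from the nvmf_tcp_init trace above, the network plumbing for this run is: move one E810 port (cvl_0_0) into a private network namespace for the target, keep its sibling (cvl_0_1) in the root namespace for the initiator, address both ends, open TCP port 4420, ping in both directions, then launch nvmf_tgt inside the namespace. A minimal sketch, using the interface and namespace names from this run:

    # flush any stale addresses, then split the two ports across namespaces
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace (backgrounded; its PID becomes nvmfpid)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &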
00:13:42.457 [2024-06-11 08:06:12.995419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.457 [2024-06-11 08:06:12.995620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.457 [2024-06-11 08:06:12.995735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.027 08:06:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.027 08:06:13 -- common/autotest_common.sh@852 -- # return 0 00:13:43.027 08:06:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:43.027 08:06:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:43.027 08:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.027 08:06:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.027 08:06:13 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:43.027 08:06:13 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:43.287 [2024-06-11 08:06:13.770212] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.287 08:06:13 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.548 08:06:13 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.548 [2024-06-11 08:06:14.091533] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.548 08:06:14 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:43.807 08:06:14 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:43.807 Malloc0 00:13:44.067 08:06:14 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:44.067 Delay0 00:13:44.067 08:06:14 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.327 08:06:14 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:44.328 NULL1 00:13:44.328 08:06:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:44.587 08:06:15 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=965550 00:13:44.587 08:06:15 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:44.587 08:06:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:44.588 08:06:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.588 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.970 Read completed with error (sct=0, sc=11) 00:13:45.970 
08:06:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.970 08:06:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:45.970 08:06:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:45.970 true 00:13:45.970 08:06:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:45.970 08:06:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.911 08:06:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.171 08:06:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:47.171 08:06:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:47.171 true 00:13:47.171 08:06:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:47.171 08:06:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.431 08:06:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.431 08:06:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:47.431 08:06:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:47.691 true 00:13:47.691 08:06:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:47.691 08:06:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.951 08:06:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.951 08:06:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:47.951 08:06:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:48.212 true 00:13:48.212 08:06:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:48.212 08:06:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.472 08:06:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.472 08:06:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:48.472 08:06:19 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:48.732 true 00:13:48.732 08:06:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:48.732 08:06:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.732 08:06:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.992 08:06:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:48.992 08:06:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:49.252 true 00:13:49.252 08:06:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:49.252 08:06:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.252 08:06:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.512 08:06:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:49.512 08:06:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:49.512 true 00:13:49.772 08:06:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:49.772 08:06:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.772 08:06:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.033 08:06:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:50.033 08:06:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:50.033 true 00:13:50.033 08:06:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:50.033 08:06:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.973 08:06:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.232 08:06:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:51.232 08:06:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:51.232 true 00:13:51.232 08:06:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:51.232 08:06:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.493 08:06:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.752 08:06:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:51.752 08:06:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:51.752 true 00:13:51.752 
08:06:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:51.752 08:06:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.011 08:06:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.011 08:06:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:52.011 08:06:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:52.271 true 00:13:52.271 08:06:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:52.271 08:06:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.531 08:06:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.531 08:06:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:52.531 08:06:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:52.791 true 00:13:52.791 08:06:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:52.792 08:06:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.051 08:06:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.051 08:06:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:53.051 08:06:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:53.311 true 00:13:53.311 08:06:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:53.312 08:06:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.312 08:06:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.572 08:06:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:53.572 08:06:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:53.832 true 00:13:53.832 08:06:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:53.832 08:06:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.832 08:06:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.092 08:06:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:54.092 08:06:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:54.352 true 00:13:54.352 08:06:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:54.352 08:06:24 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.291 08:06:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.291 08:06:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:55.291 08:06:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:55.550 true 00:13:55.550 08:06:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:55.550 08:06:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.550 08:06:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.811 08:06:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:55.811 08:06:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:55.811 true 00:13:55.811 08:06:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:55.811 08:06:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.071 08:06:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.331 08:06:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:56.331 08:06:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:56.331 true 00:13:56.331 08:06:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:56.332 08:06:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.592 08:06:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.851 08:06:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:56.851 08:06:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:56.851 true 00:13:56.851 08:06:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:56.851 08:06:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.112 08:06:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.112 08:06:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:57.112 08:06:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:57.372 true 00:13:57.372 08:06:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:57.372 08:06:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
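For reference, the target configuration put in place before the hotplug loop above started reduces to the RPC sequence below. rpc_py here is shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the trace; treat this as a condensed sketch of the commands visible above, not the literal test script:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_malloc_create 32 512 -b Malloc0
    $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc_py bdev_null_create NULL1 1000 512
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # background I/O load from the initiator side while namespaces are hot-plugged
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

The perf process is the PERF_PID (965550 in this run) that each kill -0 check above polls.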
00:13:57.632 08:06:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.632 08:06:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:57.632 08:06:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:57.892 true 00:13:57.892 08:06:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:57.892 08:06:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.153 08:06:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.153 08:06:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:58.153 08:06:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:58.413 true 00:13:58.413 08:06:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:58.413 08:06:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.353 08:06:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:59.353 08:06:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:59.353 08:06:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:59.613 true 00:13:59.613 08:06:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:13:59.613 08:06:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.613 08:06:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.873 08:06:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:59.873 08:06:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:00.133 true 00:14:00.133 08:06:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:00.133 08:06:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.133 08:06:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.393 08:06:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:00.393 08:06:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:00.393 true 00:14:00.393 08:06:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:00.393 08:06:31 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.654 08:06:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.915 08:06:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:00.915 08:06:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:00.915 true 00:14:00.915 08:06:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:00.915 08:06:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.175 08:06:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.436 08:06:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:01.436 08:06:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:01.436 true 00:14:01.437 08:06:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:01.437 08:06:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.375 08:06:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.636 08:06:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:02.636 08:06:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:02.636 true 00:14:02.636 08:06:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:02.636 08:06:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.896 08:06:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.157 08:06:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:03.157 08:06:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:03.157 true 00:14:03.157 08:06:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:03.157 08:06:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.417 08:06:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.677 08:06:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:03.677 08:06:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:03.677 true 00:14:03.677 08:06:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:03.677 08:06:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.937 08:06:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.937 08:06:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:03.937 08:06:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:04.198 true 00:14:04.198 08:06:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:04.198 08:06:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.458 08:06:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.458 08:06:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:04.458 08:06:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:04.717 true 00:14:04.717 08:06:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:04.717 08:06:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.976 08:06:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.976 08:06:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:04.976 08:06:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:05.236 true 00:14:05.236 08:06:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:05.236 08:06:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.496 08:06:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.496 08:06:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:05.496 08:06:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:05.756 true 00:14:05.756 08:06:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:05.756 08:06:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.756 08:06:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.017 08:06:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:06.017 08:06:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:06.277 true 00:14:06.277 08:06:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:06.277 08:06:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.277 08:06:36 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.537 08:06:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:06.537 08:06:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:06.537 true 00:14:06.537 08:06:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:06.537 08:06:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.796 08:06:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.067 08:06:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:07.067 08:06:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:07.067 true 00:14:07.067 08:06:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:07.067 08:06:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.335 08:06:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.595 08:06:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:07.595 08:06:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:07.595 true 00:14:07.595 08:06:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:07.595 08:06:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.537 08:06:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.796 08:06:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:08.796 08:06:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:08.796 true 00:14:08.796 08:06:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:08.796 08:06:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.056 08:06:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.056 08:06:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:09.056 08:06:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:09.316 true 00:14:09.316 08:06:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:09.316 08:06:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.575 08:06:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
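Each numbered pass above (null_size=1001, 1002, ...) is one turn of the same hot-plug cycle, reconstructed here from the xtrace; $rpc_py and $PERF_PID are the script's shorthands for the rpc.py path and the perf PID used throughout this run:

    null_size=1000
    while kill -0 "$PERF_PID"; do                       # keep cycling while perf is still running
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                    # 1001, 1002, ...
        $rpc_py bdev_null_resize NULL1 $null_size       # prints "true" on success
    done

The periodic "Read completed with error (sct=0, sc=11)" and "Message suppressed 999 times" lines on the perf side are consistent with reads landing on namespace 1 while it is detached, which is the condition this stress test is exercising.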
00:14:09.575 08:06:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:09.575 08:06:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:09.835 true 00:14:09.835 08:06:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:09.835 08:06:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.835 08:06:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.095 08:06:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:10.095 08:06:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:10.355 true 00:14:10.355 08:06:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:10.355 08:06:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.355 08:06:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.616 08:06:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:10.616 08:06:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:10.877 true 00:14:10.877 08:06:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:10.877 08:06:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.877 08:06:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.138 08:06:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:11.138 08:06:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:11.138 true 00:14:11.138 08:06:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:11.138 08:06:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.398 08:06:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.660 08:06:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:11.660 08:06:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:11.660 true 00:14:11.660 08:06:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:11.660 08:06:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.919 08:06:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.178 08:06:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:12.178 08:06:42 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:12.178 true 00:14:12.178 08:06:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:12.178 08:06:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.438 08:06:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.438 08:06:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:12.438 08:06:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:12.698 true 00:14:12.698 08:06:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:12.698 08:06:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.958 08:06:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.958 08:06:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:12.958 08:06:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:13.217 true 00:14:13.218 08:06:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:13.218 08:06:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.477 08:06:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.477 08:06:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:13.477 08:06:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:13.737 true 00:14:13.737 08:06:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:13.737 08:06:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.678 08:06:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.678 08:06:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:14.678 08:06:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:14.938 true 00:14:14.938 08:06:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:14.938 08:06:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.879 Initializing NVMe Controllers 00:14:15.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.879 Controller IO queue size 128, less than required. 00:14:15.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:15.879 Controller IO queue size 128, less than required. 
00:14:15.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:15.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:15.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:15.879 Initialization complete. Launching workers. 00:14:15.879 ======================================================== 00:14:15.879 Latency(us) 00:14:15.879 Device Information : IOPS MiB/s Average min max 00:14:15.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 346.41 0.17 119632.94 1781.24 1187871.55 00:14:15.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9247.42 4.52 13841.23 1491.45 428869.75 00:14:15.879 ======================================================== 00:14:15.879 Total : 9593.83 4.68 17661.07 1491.45 1187871.55 00:14:15.879 00:14:15.879 08:06:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.879 08:06:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:15.879 08:06:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:16.140 true 00:14:16.140 08:06:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 965550 00:14:16.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (965550) - No such process 00:14:16.140 08:06:46 -- target/ns_hotplug_stress.sh@53 -- # wait 965550 00:14:16.140 08:06:46 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.140 08:06:46 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:16.401 08:06:46 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:16.401 08:06:46 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:16.401 08:06:46 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:16.401 08:06:46 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:16.401 08:06:46 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:16.401 null0 00:14:16.401 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:16.401 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:16.401 08:06:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:16.662 null1 00:14:16.662 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:16.662 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:16.662 08:06:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:16.923 null2 00:14:16.923 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:16.923 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:16.923 08:06:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:16.923 null3 00:14:16.923 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:16.923 
08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:16.923 08:06:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:17.184 null4 00:14:17.184 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:17.184 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:17.184 08:06:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:17.184 null5 00:14:17.445 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:17.445 08:06:47 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:17.445 08:06:47 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:17.445 null6 00:14:17.445 08:06:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:17.445 08:06:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:17.445 08:06:48 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:17.707 null7 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:17.707 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
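The interleaved add_remove workers traced above (ns_hotplug_stress.sh@14 through @18) each run a short add/remove loop against a fixed namespace ID; reconstructed from the trace, the helper looks roughly like this:

    add_remove() {
        local nsid=$1 bdev=$2
        # attach and detach the same bdev ten times under a fixed namespace ID
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }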
00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@66 -- # wait 972226 972228 972231 972234 972237 972240 972242 972244 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.708 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.970 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:17.971 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
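Those workers are driven by the loop traced at ns_hotplug_stress.sh@58 through @66: eight null bdevs null0..null7 are created with the 100/4096 size and block-size arguments seen above, one background add_remove worker is launched per bdev, and the script waits on the collected PIDs (the "wait 972226 972228 ..." line above). A sketch with the variable names taken from the trace:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # one namespace ID per worker
        pids+=($!)
    done
    wait "${pids[@]}"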
00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.233 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.495 08:06:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:14:18.495 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:14:18.757 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:14:19.019 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.280 08:06:49 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.540 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.540 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.540 08:06:49 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.540 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:19.801 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.062 08:06:50 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.062 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.063 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.063 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.063 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.063 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.063 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.063 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.323 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.324 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.584 08:06:50 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.584 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.845 08:06:51 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:21.105 08:06:51 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:21.105 08:06:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:21.105 08:06:51 -- nvmf/common.sh@116 -- # sync 00:14:21.105 08:06:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:21.105 08:06:51 -- nvmf/common.sh@119 -- # set +e 00:14:21.105 08:06:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:21.105 08:06:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:21.105 rmmod nvme_tcp 00:14:21.105 rmmod nvme_fabrics 00:14:21.105 rmmod nvme_keyring 00:14:21.105 08:06:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:21.105 08:06:51 -- nvmf/common.sh@123 -- # set -e 00:14:21.105 08:06:51 -- nvmf/common.sh@124 -- # return 0 00:14:21.105 08:06:51 -- nvmf/common.sh@477 -- # '[' -n 964983 ']' 00:14:21.105 08:06:51 -- nvmf/common.sh@478 -- # killprocess 964983 00:14:21.105 08:06:51 -- common/autotest_common.sh@926 -- # '[' -z 964983 ']' 00:14:21.105 08:06:51 -- common/autotest_common.sh@930 -- # kill -0 964983 00:14:21.106 08:06:51 -- common/autotest_common.sh@931 -- # uname 00:14:21.106 08:06:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.106 08:06:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 964983 00:14:21.366 08:06:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:21.366 08:06:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:21.366 08:06:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 964983' 00:14:21.366 killing process with pid 964983 00:14:21.366 08:06:51 -- common/autotest_common.sh@945 -- # kill 964983 00:14:21.366 08:06:51 -- common/autotest_common.sh@950 -- # wait 964983 00:14:21.366 08:06:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:21.366 08:06:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 
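By this point every worker has finished its ten iterations and the script's teardown (nvmftestfini) is running: it syncs, retries unloading the kernel transport with modprobe -v -r nvme-tcp (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output), removes nvme-fabrics, and then stops the nvmf_tgt reactor (pid 964983) via killprocess. A rough sketch of that sequence as it appears in the trace; helper names match the log, but the exact retry and error handling in autotest_common.sh and nvmf/common.sh may differ, and the uname/ps checks around reactor_1 that guard the sudo-launched case are omitted here:

    # Teardown sketch based on the nvmftestfini/killprocess trace above.
    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # verbose output shows the rmmod steps
            sleep 1                            # assumption: pause between retries (not traced)
        done
        modprobe -v -r nvme-fabrics
        set -e
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0             # nothing to do if the target already exited
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }

    nvmfcleanup
    [ -n "$nvmfpid" ] && killprocess "$nvmfpid"

The nvmf_tcp_fini / remove_spdk_ns output that follows flushes the test addresses and tears down the cvl_0_0_ns_spdk network namespace so the next test can re-create it.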
00:14:21.366 08:06:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:21.366 08:06:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.366 08:06:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:21.366 08:06:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.366 08:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.366 08:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.908 08:06:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:23.908 00:14:23.908 real 0m48.303s 00:14:23.908 user 3m12.587s 00:14:23.908 sys 0m15.238s 00:14:23.908 08:06:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.908 08:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.908 ************************************ 00:14:23.908 END TEST nvmf_ns_hotplug_stress 00:14:23.908 ************************************ 00:14:23.908 08:06:53 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:23.908 08:06:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:23.908 08:06:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.908 08:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.908 ************************************ 00:14:23.908 START TEST nvmf_connect_stress 00:14:23.908 ************************************ 00:14:23.908 08:06:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:23.908 * Looking for test storage... 00:14:23.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.908 08:06:54 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.908 08:06:54 -- nvmf/common.sh@7 -- # uname -s 00:14:23.908 08:06:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.908 08:06:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.908 08:06:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.908 08:06:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.908 08:06:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.908 08:06:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.908 08:06:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.909 08:06:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.909 08:06:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.909 08:06:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.909 08:06:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.909 08:06:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.909 08:06:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.909 08:06:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.909 08:06:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.909 08:06:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.909 08:06:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.909 08:06:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.909 08:06:54 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.909 08:06:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.909 08:06:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.909 08:06:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.909 08:06:54 -- paths/export.sh@5 -- # export PATH 00:14:23.909 08:06:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.909 08:06:54 -- nvmf/common.sh@46 -- # : 0 00:14:23.909 08:06:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:23.909 08:06:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:23.909 08:06:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:23.909 08:06:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.909 08:06:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.909 08:06:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:23.909 08:06:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:23.909 08:06:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:23.909 08:06:54 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:23.909 08:06:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:23.909 08:06:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.909 08:06:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:23.909 08:06:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:23.909 08:06:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:23.909 08:06:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.909 08:06:54 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.909 08:06:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.909 08:06:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:23.909 08:06:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:23.909 08:06:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:23.909 08:06:54 -- common/autotest_common.sh@10 -- # set +x 00:14:30.494 08:07:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:30.494 08:07:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:30.494 08:07:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:30.494 08:07:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:30.494 08:07:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:30.494 08:07:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:30.494 08:07:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:30.494 08:07:00 -- nvmf/common.sh@294 -- # net_devs=() 00:14:30.494 08:07:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:30.494 08:07:00 -- nvmf/common.sh@295 -- # e810=() 00:14:30.494 08:07:00 -- nvmf/common.sh@295 -- # local -ga e810 00:14:30.494 08:07:00 -- nvmf/common.sh@296 -- # x722=() 00:14:30.494 08:07:00 -- nvmf/common.sh@296 -- # local -ga x722 00:14:30.494 08:07:00 -- nvmf/common.sh@297 -- # mlx=() 00:14:30.494 08:07:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:30.494 08:07:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.494 08:07:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:30.494 08:07:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:30.494 08:07:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:30.494 08:07:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.494 08:07:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:30.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:30.494 08:07:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.494 08:07:00 -- nvmf/common.sh@340 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:14:30.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:30.494 08:07:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:30.494 08:07:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.494 08:07:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.494 08:07:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.494 08:07:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.494 08:07:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:30.494 Found net devices under 0000:31:00.0: cvl_0_0 00:14:30.494 08:07:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.494 08:07:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.494 08:07:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.494 08:07:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.494 08:07:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.494 08:07:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:30.494 Found net devices under 0000:31:00.1: cvl_0_1 00:14:30.494 08:07:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.494 08:07:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:30.494 08:07:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:30.494 08:07:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:30.494 08:07:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:30.494 08:07:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.494 08:07:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.494 08:07:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.494 08:07:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:30.494 08:07:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.494 08:07:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.494 08:07:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:30.494 08:07:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.494 08:07:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.494 08:07:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:30.494 08:07:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:30.494 08:07:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.494 08:07:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.755 08:07:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.755 08:07:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.755 08:07:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:30.755 08:07:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.755 08:07:01 -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.755 08:07:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.755 08:07:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:30.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:14:30.755 00:14:30.755 --- 10.0.0.2 ping statistics --- 00:14:30.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.755 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:14:30.755 08:07:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:14:30.755 00:14:30.755 --- 10.0.0.1 ping statistics --- 00:14:30.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.755 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:14:30.755 08:07:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.755 08:07:01 -- nvmf/common.sh@410 -- # return 0 00:14:30.755 08:07:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:30.755 08:07:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.755 08:07:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:30.755 08:07:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:30.755 08:07:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.755 08:07:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:30.755 08:07:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:30.755 08:07:01 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:30.755 08:07:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:30.755 08:07:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:30.755 08:07:01 -- common/autotest_common.sh@10 -- # set +x 00:14:30.755 08:07:01 -- nvmf/common.sh@469 -- # nvmfpid=977377 00:14:30.755 08:07:01 -- nvmf/common.sh@470 -- # waitforlisten 977377 00:14:30.755 08:07:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:30.755 08:07:01 -- common/autotest_common.sh@819 -- # '[' -z 977377 ']' 00:14:30.755 08:07:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.755 08:07:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.755 08:07:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.755 08:07:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.755 08:07:01 -- common/autotest_common.sh@10 -- # set +x 00:14:30.755 [2024-06-11 08:07:01.364301] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
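nvmftestinit has just rebuilt the TCP test bed for connect_stress: the two e810 ports found earlier (cvl_0_0 and cvl_0_1) are split across a network namespace, addressed as 10.0.0.2 (target) and 10.0.0.1 (initiator), opened on TCP port 4420, verified with a ping in each direction, and then nvmf_tgt is started inside the namespace with core mask 0xE. A condensed sketch of that setup using only commands visible in the trace (interface names and addresses are simply what this rig uses; run as root):

    # Condensed from the nvmf_tcp_init / nvmfappstart trace above.
    target_if=cvl_0_0            # moved into the namespace, becomes 10.0.0.2
    initiator_if=cvl_0_1         # stays in the root namespace as 10.0.0.1
    ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1       # target -> initiator

    modprobe nvme-tcp
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

Once the target is listening on /var/tmp/spdk.sock, the entries below configure it over RPC (nvmf_create_transport -t tcp -o -u 8192, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10, nvmf_subsystem_add_listener on 10.0.0.2:4420, bdev_null_create NULL1 1000 512) and then launch the connect_stress binary against that subsystem for 10 seconds.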
00:14:30.755 [2024-06-11 08:07:01.364360] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.755 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.016 [2024-06-11 08:07:01.452273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.016 [2024-06-11 08:07:01.541858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:31.016 [2024-06-11 08:07:01.542027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.016 [2024-06-11 08:07:01.542038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.016 [2024-06-11 08:07:01.542047] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.016 [2024-06-11 08:07:01.542198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.016 [2024-06-11 08:07:01.542365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.016 [2024-06-11 08:07:01.542365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.587 08:07:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.587 08:07:02 -- common/autotest_common.sh@852 -- # return 0 00:14:31.587 08:07:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.587 08:07:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:31.587 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.587 08:07:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.587 08:07:02 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.587 08:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.587 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.587 [2024-06-11 08:07:02.173731] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.587 08:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.587 08:07:02 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.587 08:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.587 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.587 08:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.587 08:07:02 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.587 08:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.587 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.587 [2024-06-11 08:07:02.209566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.587 08:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.587 08:07:02 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.587 08:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.588 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:31.588 NULL1 00:14:31.588 08:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.588 08:07:02 -- target/connect_stress.sh@21 -- # PERF_PID=977668 00:14:31.588 08:07:02 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.588 08:07:02 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:31.588 08:07:02 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.848 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.848 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.849 08:07:02 -- target/connect_stress.sh@28 -- # cat 00:14:31.849 08:07:02 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:31.849 08:07:02 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:31.849 08:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.849 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.109 08:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.109 08:07:02 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:32.109 08:07:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.109 08:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.109 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.370 08:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.370 08:07:02 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:32.370 08:07:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.370 08:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.370 08:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:32.941 08:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.941 08:07:03 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:32.941 08:07:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.941 08:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.941 08:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:33.202 08:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.202 08:07:03 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:33.202 08:07:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.202 08:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.202 08:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:33.463 08:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.463 08:07:03 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:33.463 08:07:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.463 08:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.463 08:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:33.724 08:07:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.724 08:07:04 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:33.724 08:07:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.724 08:07:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.724 08:07:04 -- common/autotest_common.sh@10 -- # set +x 00:14:33.984 08:07:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.984 08:07:04 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:33.984 08:07:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.984 08:07:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.984 08:07:04 -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 08:07:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.556 08:07:04 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:34.556 08:07:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.556 08:07:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.556 08:07:04 -- common/autotest_common.sh@10 -- # set +x 00:14:34.817 08:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.818 08:07:05 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:34.818 08:07:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.818 08:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.818 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:14:35.077 08:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.077 08:07:05 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:35.077 08:07:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.077 
08:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.077 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:14:35.337 08:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.337 08:07:05 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:35.337 08:07:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.337 08:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.337 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:14:35.598 08:07:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.598 08:07:06 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:35.598 08:07:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.598 08:07:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.598 08:07:06 -- common/autotest_common.sh@10 -- # set +x 00:14:36.180 08:07:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.180 08:07:06 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:36.180 08:07:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.180 08:07:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.180 08:07:06 -- common/autotest_common.sh@10 -- # set +x 00:14:36.468 08:07:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.468 08:07:06 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:36.468 08:07:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.468 08:07:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.468 08:07:06 -- common/autotest_common.sh@10 -- # set +x 00:14:36.763 08:07:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.763 08:07:07 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:36.764 08:07:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.764 08:07:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.764 08:07:07 -- common/autotest_common.sh@10 -- # set +x 00:14:37.050 08:07:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.050 08:07:07 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:37.050 08:07:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.050 08:07:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.050 08:07:07 -- common/autotest_common.sh@10 -- # set +x 00:14:37.370 08:07:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.370 08:07:07 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:37.370 08:07:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.370 08:07:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.370 08:07:07 -- common/autotest_common.sh@10 -- # set +x 00:14:37.632 08:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.632 08:07:08 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:37.632 08:07:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.632 08:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.632 08:07:08 -- common/autotest_common.sh@10 -- # set +x 00:14:37.893 08:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.893 08:07:08 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:37.893 08:07:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.893 08:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.893 08:07:08 -- common/autotest_common.sh@10 -- # set +x 00:14:38.464 08:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.464 08:07:08 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:38.464 08:07:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.464 08:07:08 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.464 08:07:08 -- common/autotest_common.sh@10 -- # set +x 00:14:38.725 08:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.725 08:07:09 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:38.725 08:07:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.725 08:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.725 08:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.987 08:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.987 08:07:09 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:38.987 08:07:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.987 08:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.987 08:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.247 08:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.247 08:07:09 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:39.247 08:07:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.247 08:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.247 08:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.508 08:07:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.508 08:07:10 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:39.508 08:07:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.508 08:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.508 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.081 08:07:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.081 08:07:10 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:40.081 08:07:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.081 08:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.081 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.356 08:07:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.356 08:07:10 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:40.356 08:07:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.356 08:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.356 08:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:40.617 08:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.617 08:07:11 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:40.617 08:07:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.617 08:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.617 08:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:40.877 08:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.877 08:07:11 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:40.877 08:07:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.877 08:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.877 08:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.137 08:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.137 08:07:11 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:41.137 08:07:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.137 08:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.137 08:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.708 08:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.708 08:07:12 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:41.708 08:07:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.708 08:07:12 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.708 08:07:12 -- common/autotest_common.sh@10 -- # set +x 00:14:41.708 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.968 08:07:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.968 08:07:12 -- target/connect_stress.sh@34 -- # kill -0 977668 00:14:41.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (977668) - No such process 00:14:41.968 08:07:12 -- target/connect_stress.sh@38 -- # wait 977668 00:14:41.968 08:07:12 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:41.968 08:07:12 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:41.968 08:07:12 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:41.968 08:07:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.968 08:07:12 -- nvmf/common.sh@116 -- # sync 00:14:41.968 08:07:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.968 08:07:12 -- nvmf/common.sh@119 -- # set +e 00:14:41.968 08:07:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.969 08:07:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.969 rmmod nvme_tcp 00:14:41.969 rmmod nvme_fabrics 00:14:41.969 rmmod nvme_keyring 00:14:41.969 08:07:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.969 08:07:12 -- nvmf/common.sh@123 -- # set -e 00:14:41.969 08:07:12 -- nvmf/common.sh@124 -- # return 0 00:14:41.969 08:07:12 -- nvmf/common.sh@477 -- # '[' -n 977377 ']' 00:14:41.969 08:07:12 -- nvmf/common.sh@478 -- # killprocess 977377 00:14:41.969 08:07:12 -- common/autotest_common.sh@926 -- # '[' -z 977377 ']' 00:14:41.969 08:07:12 -- common/autotest_common.sh@930 -- # kill -0 977377 00:14:41.969 08:07:12 -- common/autotest_common.sh@931 -- # uname 00:14:41.969 08:07:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.969 08:07:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 977377 00:14:41.969 08:07:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:41.969 08:07:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:41.969 08:07:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 977377' 00:14:41.969 killing process with pid 977377 00:14:41.969 08:07:12 -- common/autotest_common.sh@945 -- # kill 977377 00:14:41.969 08:07:12 -- common/autotest_common.sh@950 -- # wait 977377 00:14:42.229 08:07:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:42.229 08:07:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:42.229 08:07:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:42.229 08:07:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.229 08:07:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:42.229 08:07:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.229 08:07:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.229 08:07:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.143 08:07:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:44.143 00:14:44.143 real 0m20.740s 00:14:44.143 user 0m42.186s 00:14:44.143 sys 0m8.480s 00:14:44.143 08:07:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.143 08:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:44.143 ************************************ 00:14:44.143 END TEST nvmf_connect_stress 00:14:44.143 
************************************ 00:14:44.143 08:07:14 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.143 08:07:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:44.143 08:07:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.143 08:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:44.143 ************************************ 00:14:44.143 START TEST nvmf_fused_ordering 00:14:44.143 ************************************ 00:14:44.143 08:07:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.404 * Looking for test storage... 00:14:44.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.404 08:07:14 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.404 08:07:14 -- nvmf/common.sh@7 -- # uname -s 00:14:44.404 08:07:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.404 08:07:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.404 08:07:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.404 08:07:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.404 08:07:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.404 08:07:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.404 08:07:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.404 08:07:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.404 08:07:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.404 08:07:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.404 08:07:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.404 08:07:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:44.404 08:07:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.404 08:07:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.404 08:07:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.404 08:07:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.404 08:07:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.404 08:07:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.404 08:07:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.404 08:07:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.404 08:07:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.404 08:07:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.404 08:07:14 -- paths/export.sh@5 -- # export PATH 00:14:44.404 08:07:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.404 08:07:14 -- nvmf/common.sh@46 -- # : 0 00:14:44.404 08:07:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:44.404 08:07:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:44.404 08:07:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:44.404 08:07:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.404 08:07:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.405 08:07:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:44.405 08:07:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:44.405 08:07:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:44.405 08:07:14 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:44.405 08:07:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:44.405 08:07:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.405 08:07:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:44.405 08:07:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:44.405 08:07:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:44.405 08:07:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.405 08:07:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.405 08:07:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.405 08:07:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:44.405 08:07:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:44.405 08:07:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:44.405 08:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:52.586 08:07:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:52.586 08:07:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:52.586 08:07:21 -- nvmf/common.sh@290 -- # local -a pci_devs 
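At this point the harness's gather_supported_nvmf_pci_devs (traced below) builds per-family PCI ID tables and keeps only the adapters requested by SPDK_TEST_NVMF_NICS=e810. As the trace below shows, E810 ports are matched on vendor 0x8086 / device 0x159b; a minimal, purely illustrative sysfs scan that would find the same adapters under that assumption looks like this:

```bash
# Illustrative sketch only: enumerate E810 ports (vendor 0x8086, device 0x159b)
# much as the harness does, and report their kernel net interface names.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")      # e.g. 0x8086 (Intel)
    device=$(<"$dev/device")      # e.g. 0x159b (E810-XXV)
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${dev##*/} ($vendor - $device): $(ls "$dev/net" 2>/dev/null)"
    fi
done
```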
00:14:52.586 08:07:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:52.586 08:07:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:52.586 08:07:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:52.586 08:07:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:52.586 08:07:21 -- nvmf/common.sh@294 -- # net_devs=() 00:14:52.586 08:07:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:52.586 08:07:21 -- nvmf/common.sh@295 -- # e810=() 00:14:52.586 08:07:21 -- nvmf/common.sh@295 -- # local -ga e810 00:14:52.586 08:07:21 -- nvmf/common.sh@296 -- # x722=() 00:14:52.586 08:07:21 -- nvmf/common.sh@296 -- # local -ga x722 00:14:52.586 08:07:21 -- nvmf/common.sh@297 -- # mlx=() 00:14:52.586 08:07:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:52.586 08:07:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.586 08:07:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:52.586 08:07:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:52.586 08:07:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:52.586 08:07:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:52.586 08:07:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:52.586 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:52.586 08:07:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:52.586 08:07:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:52.586 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:52.586 08:07:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:52.586 08:07:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:52.586 08:07:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
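With two E810 ports discovered (cvl_0_0 and cvl_0_1 in the trace that follows), nvmf_tcp_init moves the target-side port into its own network namespace so that initiator (10.0.0.1) and target (10.0.0.2) traffic crosses a real link rather than loopback. A condensed sketch of the commands traced below, using the interface and namespace names from this run:

```bash
# Condensed from the nvmf_tcp_init steps traced below.
ip netns add cvl_0_0_ns_spdk                      # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                # sanity-check the path before nvmf_tgt starts
```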
00:14:52.586 08:07:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:52.586 08:07:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.586 08:07:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:52.586 08:07:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.586 08:07:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:52.586 Found net devices under 0000:31:00.0: cvl_0_0 00:14:52.586 08:07:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.586 08:07:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:52.587 08:07:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.587 08:07:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:52.587 08:07:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.587 08:07:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:52.587 Found net devices under 0000:31:00.1: cvl_0_1 00:14:52.587 08:07:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.587 08:07:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:52.587 08:07:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:52.587 08:07:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:52.587 08:07:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:52.587 08:07:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:52.587 08:07:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.587 08:07:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.587 08:07:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.587 08:07:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:52.587 08:07:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.587 08:07:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.587 08:07:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:52.587 08:07:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.587 08:07:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.587 08:07:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:52.587 08:07:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:52.587 08:07:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.587 08:07:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.587 08:07:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.587 08:07:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.587 08:07:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:52.587 08:07:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.587 08:07:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.587 08:07:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.587 08:07:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:52.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:14:52.587 00:14:52.587 --- 10.0.0.2 ping statistics --- 00:14:52.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.587 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:14:52.587 08:07:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:14:52.587 00:14:52.587 --- 10.0.0.1 ping statistics --- 00:14:52.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.587 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:14:52.587 08:07:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.587 08:07:22 -- nvmf/common.sh@410 -- # return 0 00:14:52.587 08:07:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:52.587 08:07:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.587 08:07:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:52.587 08:07:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:52.587 08:07:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.587 08:07:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:52.587 08:07:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:52.587 08:07:22 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:52.587 08:07:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:52.587 08:07:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:52.587 08:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 08:07:22 -- nvmf/common.sh@469 -- # nvmfpid=983876 00:14:52.587 08:07:22 -- nvmf/common.sh@470 -- # waitforlisten 983876 00:14:52.587 08:07:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:52.587 08:07:22 -- common/autotest_common.sh@819 -- # '[' -z 983876 ']' 00:14:52.587 08:07:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.587 08:07:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.587 08:07:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.587 08:07:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.587 08:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 [2024-06-11 08:07:22.275240] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:52.587 [2024-06-11 08:07:22.275298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.587 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.587 [2024-06-11 08:07:22.362487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.587 [2024-06-11 08:07:22.453853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:52.587 [2024-06-11 08:07:22.453999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.587 [2024-06-11 08:07:22.454009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:52.587 [2024-06-11 08:07:22.454016] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.587 [2024-06-11 08:07:22.454042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.587 08:07:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:52.587 08:07:23 -- common/autotest_common.sh@852 -- # return 0 00:14:52.587 08:07:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:52.587 08:07:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:52.587 08:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 08:07:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.587 08:07:23 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.587 08:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.587 08:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 [2024-06-11 08:07:23.105009] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.587 08:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.587 08:07:23 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.587 08:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.587 08:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 08:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.587 08:07:23 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.587 08:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.587 08:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 [2024-06-11 08:07:23.129221] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.587 08:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.587 08:07:23 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:52.587 08:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.587 08:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 NULL1 00:14:52.587 08:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.587 08:07:23 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:52.587 08:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.587 08:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 08:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.587 08:07:23 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:52.587 08:07:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.587 08:07:23 -- common/autotest_common.sh@10 -- # set +x 00:14:52.587 08:07:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.587 08:07:23 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:52.587 [2024-06-11 08:07:23.198016] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:14:52.587 [2024-06-11 08:07:23.198079] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid984223 ] 00:14:52.587 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.160 Attached to nqn.2016-06.io.spdk:cnode1 00:14:53.160 Namespace ID: 1 size: 1GB 00:14:53.160 fused_ordering(0) 00:14:53.160 fused_ordering(1) 00:14:53.160 fused_ordering(2) 00:14:53.160 fused_ordering(3) 00:14:53.160 fused_ordering(4) 00:14:53.160 fused_ordering(5) 00:14:53.160 fused_ordering(6) 00:14:53.160 fused_ordering(7) 00:14:53.160 fused_ordering(8) 00:14:53.160 fused_ordering(9) 00:14:53.160 fused_ordering(10) 00:14:53.160 fused_ordering(11) 00:14:53.160 fused_ordering(12) 00:14:53.160 fused_ordering(13) 00:14:53.160 fused_ordering(14) 00:14:53.160 fused_ordering(15) 00:14:53.160 fused_ordering(16) 00:14:53.160 fused_ordering(17) 00:14:53.160 fused_ordering(18) 00:14:53.160 fused_ordering(19) 00:14:53.160 fused_ordering(20) 00:14:53.160 fused_ordering(21) 00:14:53.160 fused_ordering(22) 00:14:53.160 fused_ordering(23) 00:14:53.160 fused_ordering(24) 00:14:53.160 fused_ordering(25) 00:14:53.160 fused_ordering(26) 00:14:53.160 fused_ordering(27) 00:14:53.160 fused_ordering(28) 00:14:53.160 fused_ordering(29) 00:14:53.160 fused_ordering(30) 00:14:53.160 fused_ordering(31) 00:14:53.160 fused_ordering(32) 00:14:53.160 fused_ordering(33) 00:14:53.160 fused_ordering(34) 00:14:53.160 fused_ordering(35) 00:14:53.160 fused_ordering(36) 00:14:53.160 fused_ordering(37) 00:14:53.160 fused_ordering(38) 00:14:53.160 fused_ordering(39) 00:14:53.160 fused_ordering(40) 00:14:53.160 fused_ordering(41) 00:14:53.160 fused_ordering(42) 00:14:53.160 fused_ordering(43) 00:14:53.160 fused_ordering(44) 00:14:53.160 fused_ordering(45) 00:14:53.160 fused_ordering(46) 00:14:53.160 fused_ordering(47) 00:14:53.160 fused_ordering(48) 00:14:53.160 fused_ordering(49) 00:14:53.160 fused_ordering(50) 00:14:53.160 fused_ordering(51) 00:14:53.160 fused_ordering(52) 00:14:53.160 fused_ordering(53) 00:14:53.160 fused_ordering(54) 00:14:53.160 fused_ordering(55) 00:14:53.160 fused_ordering(56) 00:14:53.160 fused_ordering(57) 00:14:53.160 fused_ordering(58) 00:14:53.160 fused_ordering(59) 00:14:53.161 fused_ordering(60) 00:14:53.161 fused_ordering(61) 00:14:53.161 fused_ordering(62) 00:14:53.161 fused_ordering(63) 00:14:53.161 fused_ordering(64) 00:14:53.161 fused_ordering(65) 00:14:53.161 fused_ordering(66) 00:14:53.161 fused_ordering(67) 00:14:53.161 fused_ordering(68) 00:14:53.161 fused_ordering(69) 00:14:53.161 fused_ordering(70) 00:14:53.161 fused_ordering(71) 00:14:53.161 fused_ordering(72) 00:14:53.161 fused_ordering(73) 00:14:53.161 fused_ordering(74) 00:14:53.161 fused_ordering(75) 00:14:53.161 fused_ordering(76) 00:14:53.161 fused_ordering(77) 00:14:53.161 fused_ordering(78) 00:14:53.161 fused_ordering(79) 00:14:53.161 fused_ordering(80) 00:14:53.161 fused_ordering(81) 00:14:53.161 fused_ordering(82) 00:14:53.161 fused_ordering(83) 00:14:53.161 fused_ordering(84) 00:14:53.161 fused_ordering(85) 00:14:53.161 fused_ordering(86) 00:14:53.161 fused_ordering(87) 00:14:53.161 fused_ordering(88) 00:14:53.161 fused_ordering(89) 00:14:53.161 fused_ordering(90) 00:14:53.161 fused_ordering(91) 00:14:53.161 fused_ordering(92) 00:14:53.161 fused_ordering(93) 00:14:53.161 fused_ordering(94) 00:14:53.161 fused_ordering(95) 00:14:53.161 fused_ordering(96) 00:14:53.161 
fused_ordering(97) 00:14:53.161 fused_ordering(98) 00:14:53.161 fused_ordering(99) 00:14:53.161 fused_ordering(100) 00:14:53.161 fused_ordering(101) 00:14:53.161 fused_ordering(102) 00:14:53.161 fused_ordering(103) 00:14:53.161 fused_ordering(104) 00:14:53.161 fused_ordering(105) 00:14:53.161 fused_ordering(106) 00:14:53.161 fused_ordering(107) 00:14:53.161 fused_ordering(108) 00:14:53.161 fused_ordering(109) 00:14:53.161 fused_ordering(110) 00:14:53.161 fused_ordering(111) 00:14:53.161 fused_ordering(112) 00:14:53.161 fused_ordering(113) 00:14:53.161 fused_ordering(114) 00:14:53.161 fused_ordering(115) 00:14:53.161 fused_ordering(116) 00:14:53.161 fused_ordering(117) 00:14:53.161 fused_ordering(118) 00:14:53.161 fused_ordering(119) 00:14:53.161 fused_ordering(120) 00:14:53.161 fused_ordering(121) 00:14:53.161 fused_ordering(122) 00:14:53.161 fused_ordering(123) 00:14:53.161 fused_ordering(124) 00:14:53.161 fused_ordering(125) 00:14:53.161 fused_ordering(126) 00:14:53.161 fused_ordering(127) 00:14:53.161 fused_ordering(128) 00:14:53.161 fused_ordering(129) 00:14:53.161 fused_ordering(130) 00:14:53.161 fused_ordering(131) 00:14:53.161 fused_ordering(132) 00:14:53.161 fused_ordering(133) 00:14:53.161 fused_ordering(134) 00:14:53.161 fused_ordering(135) 00:14:53.161 fused_ordering(136) 00:14:53.161 fused_ordering(137) 00:14:53.161 fused_ordering(138) 00:14:53.161 fused_ordering(139) 00:14:53.161 fused_ordering(140) 00:14:53.161 fused_ordering(141) 00:14:53.161 fused_ordering(142) 00:14:53.161 fused_ordering(143) 00:14:53.161 fused_ordering(144) 00:14:53.161 fused_ordering(145) 00:14:53.161 fused_ordering(146) 00:14:53.161 fused_ordering(147) 00:14:53.161 fused_ordering(148) 00:14:53.161 fused_ordering(149) 00:14:53.161 fused_ordering(150) 00:14:53.161 fused_ordering(151) 00:14:53.161 fused_ordering(152) 00:14:53.161 fused_ordering(153) 00:14:53.161 fused_ordering(154) 00:14:53.161 fused_ordering(155) 00:14:53.161 fused_ordering(156) 00:14:53.161 fused_ordering(157) 00:14:53.161 fused_ordering(158) 00:14:53.161 fused_ordering(159) 00:14:53.161 fused_ordering(160) 00:14:53.161 fused_ordering(161) 00:14:53.161 fused_ordering(162) 00:14:53.161 fused_ordering(163) 00:14:53.161 fused_ordering(164) 00:14:53.161 fused_ordering(165) 00:14:53.161 fused_ordering(166) 00:14:53.161 fused_ordering(167) 00:14:53.161 fused_ordering(168) 00:14:53.161 fused_ordering(169) 00:14:53.161 fused_ordering(170) 00:14:53.161 fused_ordering(171) 00:14:53.161 fused_ordering(172) 00:14:53.161 fused_ordering(173) 00:14:53.161 fused_ordering(174) 00:14:53.161 fused_ordering(175) 00:14:53.161 fused_ordering(176) 00:14:53.161 fused_ordering(177) 00:14:53.161 fused_ordering(178) 00:14:53.161 fused_ordering(179) 00:14:53.161 fused_ordering(180) 00:14:53.161 fused_ordering(181) 00:14:53.161 fused_ordering(182) 00:14:53.161 fused_ordering(183) 00:14:53.161 fused_ordering(184) 00:14:53.161 fused_ordering(185) 00:14:53.161 fused_ordering(186) 00:14:53.161 fused_ordering(187) 00:14:53.161 fused_ordering(188) 00:14:53.161 fused_ordering(189) 00:14:53.161 fused_ordering(190) 00:14:53.161 fused_ordering(191) 00:14:53.161 fused_ordering(192) 00:14:53.161 fused_ordering(193) 00:14:53.161 fused_ordering(194) 00:14:53.161 fused_ordering(195) 00:14:53.161 fused_ordering(196) 00:14:53.161 fused_ordering(197) 00:14:53.161 fused_ordering(198) 00:14:53.161 fused_ordering(199) 00:14:53.161 fused_ordering(200) 00:14:53.161 fused_ordering(201) 00:14:53.161 fused_ordering(202) 00:14:53.161 fused_ordering(203) 00:14:53.161 fused_ordering(204) 
00:14:53.161 fused_ordering(205) 00:14:53.422 fused_ordering(206) 00:14:53.422 fused_ordering(207) 00:14:53.422 fused_ordering(208) 00:14:53.422 fused_ordering(209) 00:14:53.422 fused_ordering(210) 00:14:53.422 fused_ordering(211) 00:14:53.422 fused_ordering(212) 00:14:53.422 fused_ordering(213) 00:14:53.422 fused_ordering(214) 00:14:53.422 fused_ordering(215) 00:14:53.422 fused_ordering(216) 00:14:53.422 fused_ordering(217) 00:14:53.422 fused_ordering(218) 00:14:53.422 fused_ordering(219) 00:14:53.422 fused_ordering(220) 00:14:53.422 fused_ordering(221) 00:14:53.422 fused_ordering(222) 00:14:53.422 fused_ordering(223) 00:14:53.422 fused_ordering(224) 00:14:53.422 fused_ordering(225) 00:14:53.423 fused_ordering(226) 00:14:53.423 fused_ordering(227) 00:14:53.423 fused_ordering(228) 00:14:53.423 fused_ordering(229) 00:14:53.423 fused_ordering(230) 00:14:53.423 fused_ordering(231) 00:14:53.423 fused_ordering(232) 00:14:53.423 fused_ordering(233) 00:14:53.423 fused_ordering(234) 00:14:53.423 fused_ordering(235) 00:14:53.423 fused_ordering(236) 00:14:53.423 fused_ordering(237) 00:14:53.423 fused_ordering(238) 00:14:53.423 fused_ordering(239) 00:14:53.423 fused_ordering(240) 00:14:53.423 fused_ordering(241) 00:14:53.423 fused_ordering(242) 00:14:53.423 fused_ordering(243) 00:14:53.423 fused_ordering(244) 00:14:53.423 fused_ordering(245) 00:14:53.423 fused_ordering(246) 00:14:53.423 fused_ordering(247) 00:14:53.423 fused_ordering(248) 00:14:53.423 fused_ordering(249) 00:14:53.423 fused_ordering(250) 00:14:53.423 fused_ordering(251) 00:14:53.423 fused_ordering(252) 00:14:53.423 fused_ordering(253) 00:14:53.423 fused_ordering(254) 00:14:53.423 fused_ordering(255) 00:14:53.423 fused_ordering(256) 00:14:53.423 fused_ordering(257) 00:14:53.423 fused_ordering(258) 00:14:53.423 fused_ordering(259) 00:14:53.423 fused_ordering(260) 00:14:53.423 fused_ordering(261) 00:14:53.423 fused_ordering(262) 00:14:53.423 fused_ordering(263) 00:14:53.423 fused_ordering(264) 00:14:53.423 fused_ordering(265) 00:14:53.423 fused_ordering(266) 00:14:53.423 fused_ordering(267) 00:14:53.423 fused_ordering(268) 00:14:53.423 fused_ordering(269) 00:14:53.423 fused_ordering(270) 00:14:53.423 fused_ordering(271) 00:14:53.423 fused_ordering(272) 00:14:53.423 fused_ordering(273) 00:14:53.423 fused_ordering(274) 00:14:53.423 fused_ordering(275) 00:14:53.423 fused_ordering(276) 00:14:53.423 fused_ordering(277) 00:14:53.423 fused_ordering(278) 00:14:53.423 fused_ordering(279) 00:14:53.423 fused_ordering(280) 00:14:53.423 fused_ordering(281) 00:14:53.423 fused_ordering(282) 00:14:53.423 fused_ordering(283) 00:14:53.423 fused_ordering(284) 00:14:53.423 fused_ordering(285) 00:14:53.423 fused_ordering(286) 00:14:53.423 fused_ordering(287) 00:14:53.423 fused_ordering(288) 00:14:53.423 fused_ordering(289) 00:14:53.423 fused_ordering(290) 00:14:53.423 fused_ordering(291) 00:14:53.423 fused_ordering(292) 00:14:53.423 fused_ordering(293) 00:14:53.423 fused_ordering(294) 00:14:53.423 fused_ordering(295) 00:14:53.423 fused_ordering(296) 00:14:53.423 fused_ordering(297) 00:14:53.423 fused_ordering(298) 00:14:53.423 fused_ordering(299) 00:14:53.423 fused_ordering(300) 00:14:53.423 fused_ordering(301) 00:14:53.423 fused_ordering(302) 00:14:53.423 fused_ordering(303) 00:14:53.423 fused_ordering(304) 00:14:53.423 fused_ordering(305) 00:14:53.423 fused_ordering(306) 00:14:53.423 fused_ordering(307) 00:14:53.423 fused_ordering(308) 00:14:53.423 fused_ordering(309) 00:14:53.423 fused_ordering(310) 00:14:53.423 fused_ordering(311) 00:14:53.423 
fused_ordering(312) 00:14:53.423 fused_ordering(313) 00:14:53.423 fused_ordering(314) 00:14:53.423 fused_ordering(315) 00:14:53.423 fused_ordering(316) 00:14:53.423 fused_ordering(317) 00:14:53.423 fused_ordering(318) 00:14:53.423 fused_ordering(319) 00:14:53.423 fused_ordering(320) 00:14:53.423 fused_ordering(321) 00:14:53.423 fused_ordering(322) 00:14:53.423 fused_ordering(323) 00:14:53.423 fused_ordering(324) 00:14:53.423 fused_ordering(325) 00:14:53.423 fused_ordering(326) 00:14:53.423 fused_ordering(327) 00:14:53.423 fused_ordering(328) 00:14:53.423 fused_ordering(329) 00:14:53.423 fused_ordering(330) 00:14:53.423 fused_ordering(331) 00:14:53.423 fused_ordering(332) 00:14:53.423 fused_ordering(333) 00:14:53.423 fused_ordering(334) 00:14:53.423 fused_ordering(335) 00:14:53.423 fused_ordering(336) 00:14:53.423 fused_ordering(337) 00:14:53.423 fused_ordering(338) 00:14:53.423 fused_ordering(339) 00:14:53.423 fused_ordering(340) 00:14:53.423 fused_ordering(341) 00:14:53.423 fused_ordering(342) 00:14:53.423 fused_ordering(343) 00:14:53.423 fused_ordering(344) 00:14:53.423 fused_ordering(345) 00:14:53.423 fused_ordering(346) 00:14:53.423 fused_ordering(347) 00:14:53.423 fused_ordering(348) 00:14:53.423 fused_ordering(349) 00:14:53.423 fused_ordering(350) 00:14:53.423 fused_ordering(351) 00:14:53.423 fused_ordering(352) 00:14:53.423 fused_ordering(353) 00:14:53.423 fused_ordering(354) 00:14:53.423 fused_ordering(355) 00:14:53.423 fused_ordering(356) 00:14:53.423 fused_ordering(357) 00:14:53.423 fused_ordering(358) 00:14:53.423 fused_ordering(359) 00:14:53.423 fused_ordering(360) 00:14:53.423 fused_ordering(361) 00:14:53.423 fused_ordering(362) 00:14:53.423 fused_ordering(363) 00:14:53.423 fused_ordering(364) 00:14:53.423 fused_ordering(365) 00:14:53.423 fused_ordering(366) 00:14:53.423 fused_ordering(367) 00:14:53.423 fused_ordering(368) 00:14:53.423 fused_ordering(369) 00:14:53.423 fused_ordering(370) 00:14:53.423 fused_ordering(371) 00:14:53.423 fused_ordering(372) 00:14:53.423 fused_ordering(373) 00:14:53.423 fused_ordering(374) 00:14:53.423 fused_ordering(375) 00:14:53.423 fused_ordering(376) 00:14:53.423 fused_ordering(377) 00:14:53.423 fused_ordering(378) 00:14:53.423 fused_ordering(379) 00:14:53.423 fused_ordering(380) 00:14:53.423 fused_ordering(381) 00:14:53.423 fused_ordering(382) 00:14:53.423 fused_ordering(383) 00:14:53.423 fused_ordering(384) 00:14:53.423 fused_ordering(385) 00:14:53.423 fused_ordering(386) 00:14:53.423 fused_ordering(387) 00:14:53.423 fused_ordering(388) 00:14:53.423 fused_ordering(389) 00:14:53.423 fused_ordering(390) 00:14:53.423 fused_ordering(391) 00:14:53.423 fused_ordering(392) 00:14:53.423 fused_ordering(393) 00:14:53.423 fused_ordering(394) 00:14:53.423 fused_ordering(395) 00:14:53.423 fused_ordering(396) 00:14:53.423 fused_ordering(397) 00:14:53.423 fused_ordering(398) 00:14:53.423 fused_ordering(399) 00:14:53.423 fused_ordering(400) 00:14:53.423 fused_ordering(401) 00:14:53.423 fused_ordering(402) 00:14:53.423 fused_ordering(403) 00:14:53.423 fused_ordering(404) 00:14:53.423 fused_ordering(405) 00:14:53.423 fused_ordering(406) 00:14:53.423 fused_ordering(407) 00:14:53.423 fused_ordering(408) 00:14:53.423 fused_ordering(409) 00:14:53.423 fused_ordering(410) 00:14:53.995 fused_ordering(411) 00:14:53.995 fused_ordering(412) 00:14:53.995 fused_ordering(413) 00:14:53.995 fused_ordering(414) 00:14:53.995 fused_ordering(415) 00:14:53.995 fused_ordering(416) 00:14:53.995 fused_ordering(417) 00:14:53.995 fused_ordering(418) 00:14:53.995 fused_ordering(419) 
00:14:53.995 fused_ordering(420) 00:14:53.995 fused_ordering(421) 00:14:53.995 fused_ordering(422) 00:14:53.995 fused_ordering(423) 00:14:53.995 fused_ordering(424) 00:14:53.995 fused_ordering(425) 00:14:53.995 fused_ordering(426) 00:14:53.995 fused_ordering(427) 00:14:53.995 fused_ordering(428) 00:14:53.995 fused_ordering(429) 00:14:53.995 fused_ordering(430) 00:14:53.995 fused_ordering(431) 00:14:53.995 fused_ordering(432) 00:14:53.995 fused_ordering(433) 00:14:53.995 fused_ordering(434) 00:14:53.995 fused_ordering(435) 00:14:53.995 fused_ordering(436) 00:14:53.995 fused_ordering(437) 00:14:53.995 fused_ordering(438) 00:14:53.995 fused_ordering(439) 00:14:53.995 fused_ordering(440) 00:14:53.995 fused_ordering(441) 00:14:53.995 fused_ordering(442) 00:14:53.995 fused_ordering(443) 00:14:53.995 fused_ordering(444) 00:14:53.995 fused_ordering(445) 00:14:53.995 fused_ordering(446) 00:14:53.995 fused_ordering(447) 00:14:53.995 fused_ordering(448) 00:14:53.995 fused_ordering(449) 00:14:53.995 fused_ordering(450) 00:14:53.995 fused_ordering(451) 00:14:53.995 fused_ordering(452) 00:14:53.995 fused_ordering(453) 00:14:53.995 fused_ordering(454) 00:14:53.995 fused_ordering(455) 00:14:53.995 fused_ordering(456) 00:14:53.995 fused_ordering(457) 00:14:53.995 fused_ordering(458) 00:14:53.995 fused_ordering(459) 00:14:53.995 fused_ordering(460) 00:14:53.995 fused_ordering(461) 00:14:53.995 fused_ordering(462) 00:14:53.995 fused_ordering(463) 00:14:53.995 fused_ordering(464) 00:14:53.995 fused_ordering(465) 00:14:53.995 fused_ordering(466) 00:14:53.995 fused_ordering(467) 00:14:53.995 fused_ordering(468) 00:14:53.995 fused_ordering(469) 00:14:53.995 fused_ordering(470) 00:14:53.995 fused_ordering(471) 00:14:53.995 fused_ordering(472) 00:14:53.995 fused_ordering(473) 00:14:53.995 fused_ordering(474) 00:14:53.995 fused_ordering(475) 00:14:53.995 fused_ordering(476) 00:14:53.995 fused_ordering(477) 00:14:53.995 fused_ordering(478) 00:14:53.995 fused_ordering(479) 00:14:53.995 fused_ordering(480) 00:14:53.995 fused_ordering(481) 00:14:53.995 fused_ordering(482) 00:14:53.995 fused_ordering(483) 00:14:53.995 fused_ordering(484) 00:14:53.995 fused_ordering(485) 00:14:53.995 fused_ordering(486) 00:14:53.995 fused_ordering(487) 00:14:53.995 fused_ordering(488) 00:14:53.995 fused_ordering(489) 00:14:53.995 fused_ordering(490) 00:14:53.995 fused_ordering(491) 00:14:53.995 fused_ordering(492) 00:14:53.995 fused_ordering(493) 00:14:53.995 fused_ordering(494) 00:14:53.995 fused_ordering(495) 00:14:53.995 fused_ordering(496) 00:14:53.995 fused_ordering(497) 00:14:53.995 fused_ordering(498) 00:14:53.995 fused_ordering(499) 00:14:53.995 fused_ordering(500) 00:14:53.995 fused_ordering(501) 00:14:53.995 fused_ordering(502) 00:14:53.995 fused_ordering(503) 00:14:53.995 fused_ordering(504) 00:14:53.995 fused_ordering(505) 00:14:53.995 fused_ordering(506) 00:14:53.995 fused_ordering(507) 00:14:53.995 fused_ordering(508) 00:14:53.995 fused_ordering(509) 00:14:53.995 fused_ordering(510) 00:14:53.995 fused_ordering(511) 00:14:53.995 fused_ordering(512) 00:14:53.995 fused_ordering(513) 00:14:53.995 fused_ordering(514) 00:14:53.995 fused_ordering(515) 00:14:53.995 fused_ordering(516) 00:14:53.995 fused_ordering(517) 00:14:53.995 fused_ordering(518) 00:14:53.995 fused_ordering(519) 00:14:53.995 fused_ordering(520) 00:14:53.995 fused_ordering(521) 00:14:53.995 fused_ordering(522) 00:14:53.995 fused_ordering(523) 00:14:53.995 fused_ordering(524) 00:14:53.995 fused_ordering(525) 00:14:53.995 fused_ordering(526) 00:14:53.995 
00:14:53.995 fused_ordering(527) ... 00:14:54.829 fused_ordering(956) 00:14:54.829
fused_ordering(957) 00:14:54.829 fused_ordering(958) 00:14:54.829 fused_ordering(959) 00:14:54.829 fused_ordering(960) 00:14:54.829 fused_ordering(961) 00:14:54.829 fused_ordering(962) 00:14:54.829 fused_ordering(963) 00:14:54.829 fused_ordering(964) 00:14:54.829 fused_ordering(965) 00:14:54.829 fused_ordering(966) 00:14:54.829 fused_ordering(967) 00:14:54.829 fused_ordering(968) 00:14:54.829 fused_ordering(969) 00:14:54.829 fused_ordering(970) 00:14:54.829 fused_ordering(971) 00:14:54.829 fused_ordering(972) 00:14:54.829 fused_ordering(973) 00:14:54.829 fused_ordering(974) 00:14:54.829 fused_ordering(975) 00:14:54.829 fused_ordering(976) 00:14:54.829 fused_ordering(977) 00:14:54.829 fused_ordering(978) 00:14:54.829 fused_ordering(979) 00:14:54.830 fused_ordering(980) 00:14:54.830 fused_ordering(981) 00:14:54.830 fused_ordering(982) 00:14:54.830 fused_ordering(983) 00:14:54.830 fused_ordering(984) 00:14:54.830 fused_ordering(985) 00:14:54.830 fused_ordering(986) 00:14:54.830 fused_ordering(987) 00:14:54.830 fused_ordering(988) 00:14:54.830 fused_ordering(989) 00:14:54.830 fused_ordering(990) 00:14:54.830 fused_ordering(991) 00:14:54.830 fused_ordering(992) 00:14:54.830 fused_ordering(993) 00:14:54.830 fused_ordering(994) 00:14:54.830 fused_ordering(995) 00:14:54.830 fused_ordering(996) 00:14:54.830 fused_ordering(997) 00:14:54.830 fused_ordering(998) 00:14:54.830 fused_ordering(999) 00:14:54.830 fused_ordering(1000) 00:14:54.830 fused_ordering(1001) 00:14:54.830 fused_ordering(1002) 00:14:54.830 fused_ordering(1003) 00:14:54.830 fused_ordering(1004) 00:14:54.830 fused_ordering(1005) 00:14:54.830 fused_ordering(1006) 00:14:54.830 fused_ordering(1007) 00:14:54.830 fused_ordering(1008) 00:14:54.830 fused_ordering(1009) 00:14:54.830 fused_ordering(1010) 00:14:54.830 fused_ordering(1011) 00:14:54.830 fused_ordering(1012) 00:14:54.830 fused_ordering(1013) 00:14:54.830 fused_ordering(1014) 00:14:54.830 fused_ordering(1015) 00:14:54.830 fused_ordering(1016) 00:14:54.830 fused_ordering(1017) 00:14:54.830 fused_ordering(1018) 00:14:54.830 fused_ordering(1019) 00:14:54.830 fused_ordering(1020) 00:14:54.830 fused_ordering(1021) 00:14:54.830 fused_ordering(1022) 00:14:54.830 fused_ordering(1023) 00:14:54.830 08:07:25 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:54.830 08:07:25 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:54.830 08:07:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:54.830 08:07:25 -- nvmf/common.sh@116 -- # sync 00:14:54.830 08:07:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:54.830 08:07:25 -- nvmf/common.sh@119 -- # set +e 00:14:54.830 08:07:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:54.830 08:07:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:54.830 rmmod nvme_tcp 00:14:54.830 rmmod nvme_fabrics 00:14:54.830 rmmod nvme_keyring 00:14:55.091 08:07:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:55.091 08:07:25 -- nvmf/common.sh@123 -- # set -e 00:14:55.091 08:07:25 -- nvmf/common.sh@124 -- # return 0 00:14:55.091 08:07:25 -- nvmf/common.sh@477 -- # '[' -n 983876 ']' 00:14:55.091 08:07:25 -- nvmf/common.sh@478 -- # killprocess 983876 00:14:55.091 08:07:25 -- common/autotest_common.sh@926 -- # '[' -z 983876 ']' 00:14:55.091 08:07:25 -- common/autotest_common.sh@930 -- # kill -0 983876 00:14:55.091 08:07:25 -- common/autotest_common.sh@931 -- # uname 00:14:55.091 08:07:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:55.091 08:07:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 983876 00:14:55.091 08:07:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:55.091 08:07:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:55.091 08:07:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 983876' 00:14:55.091 killing process with pid 983876 00:14:55.091 08:07:25 -- common/autotest_common.sh@945 -- # kill 983876 00:14:55.091 08:07:25 -- common/autotest_common.sh@950 -- # wait 983876 00:14:55.091 08:07:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:55.091 08:07:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:55.091 08:07:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:55.091 08:07:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.091 08:07:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:55.091 08:07:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.091 08:07:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.091 08:07:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.638 08:07:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:57.638 00:14:57.638 real 0m12.947s 00:14:57.638 user 0m6.881s 00:14:57.638 sys 0m6.667s 00:14:57.638 08:07:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.638 08:07:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.638 ************************************ 00:14:57.638 END TEST nvmf_fused_ordering 00:14:57.638 ************************************ 00:14:57.638 08:07:27 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:57.638 08:07:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:57.638 08:07:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:57.638 08:07:27 -- common/autotest_common.sh@10 -- # set +x 00:14:57.639 ************************************ 00:14:57.639 START TEST nvmf_delete_subsystem 00:14:57.639 ************************************ 00:14:57.639 08:07:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:57.639 * Looking for test storage... 
00:14:57.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.639 08:07:27 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.639 08:07:27 -- nvmf/common.sh@7 -- # uname -s 00:14:57.639 08:07:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.639 08:07:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.639 08:07:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.639 08:07:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.639 08:07:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.639 08:07:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.639 08:07:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.639 08:07:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.639 08:07:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.639 08:07:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.639 08:07:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:57.639 08:07:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:57.639 08:07:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.639 08:07:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.639 08:07:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.639 08:07:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.639 08:07:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.639 08:07:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.639 08:07:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.639 08:07:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.639 08:07:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.639 08:07:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.639 08:07:27 -- paths/export.sh@5 -- # export PATH 00:14:57.639 08:07:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.639 08:07:27 -- nvmf/common.sh@46 -- # : 0 00:14:57.639 08:07:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:57.639 08:07:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:57.639 08:07:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:57.639 08:07:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.639 08:07:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.639 08:07:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:57.639 08:07:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:57.639 08:07:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:57.639 08:07:27 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:57.639 08:07:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:57.639 08:07:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.639 08:07:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:57.639 08:07:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:57.639 08:07:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:57.639 08:07:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.639 08:07:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.639 08:07:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.639 08:07:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:57.639 08:07:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:57.639 08:07:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:57.639 08:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:04.228 08:07:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:04.228 08:07:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:04.228 08:07:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:04.228 08:07:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:04.228 08:07:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:04.228 08:07:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:04.228 08:07:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:04.228 08:07:34 -- nvmf/common.sh@294 -- # net_devs=() 00:15:04.228 08:07:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:04.228 08:07:34 -- nvmf/common.sh@295 -- # e810=() 00:15:04.228 08:07:34 -- nvmf/common.sh@295 -- # local -ga e810 00:15:04.228 08:07:34 -- nvmf/common.sh@296 -- # x722=() 
00:15:04.228 08:07:34 -- nvmf/common.sh@296 -- # local -ga x722 00:15:04.228 08:07:34 -- nvmf/common.sh@297 -- # mlx=() 00:15:04.228 08:07:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:04.228 08:07:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.228 08:07:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:04.228 08:07:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:04.228 08:07:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:04.228 08:07:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:04.228 08:07:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:04.228 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:04.228 08:07:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:04.228 08:07:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:04.228 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:04.228 08:07:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:04.228 08:07:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:04.228 08:07:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.228 08:07:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:04.228 08:07:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.228 08:07:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:04.228 Found net devices under 0000:31:00.0: cvl_0_0 00:15:04.228 08:07:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
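The device discovery above is the gather_supported_nvmf_pci_devs step from nvmf/common.sh: supported PCI IDs are grouped into the e810/x722/mlx buckets, each matching adapter (here 0x8086:0x159b) is kept, and the kernel interface name is read back from sysfs, which is where the "Found net devices under 0000:31:00.0: cvl_0_0" line comes from; the same lookup repeats for the second port just below. A minimal standalone sketch of that sysfs lookup follows; it only assumes the standard /sys/bus/pci layout, not the exact helper logic in common.sh.

```bash
#!/usr/bin/env bash
# Minimal sketch: map Intel E810 ports (vendor 0x8086, device 0x159b) to their
# kernel net device names, the same association the "Found net devices under
# 0000:31:00.x" log lines report. Uses only the standard sysfs layout.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 ]] || continue
    [[ $(<"$pci/device") == 0x159b ]] || continue
    for netdir in "$pci"/net/*; do        # the netdev name lives under .../net/<name>
        [[ -e $netdir ]] || continue
        net_devs+=("$(basename "$netdir")")
        echo "Found net devices under ${pci##*/}: $(basename "$netdir")"
    done
done
printf 'Detected %d E810-backed interfaces: %s\n' "${#net_devs[@]}" "${net_devs[*]}"
```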
00:15:04.228 08:07:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:04.228 08:07:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.228 08:07:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:04.228 08:07:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.228 08:07:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:04.228 Found net devices under 0000:31:00.1: cvl_0_1 00:15:04.228 08:07:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.228 08:07:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:04.228 08:07:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:04.228 08:07:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:04.228 08:07:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:04.228 08:07:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.228 08:07:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.228 08:07:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.228 08:07:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:04.228 08:07:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.228 08:07:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.228 08:07:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:04.228 08:07:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.228 08:07:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.228 08:07:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:04.228 08:07:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:04.228 08:07:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.228 08:07:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.228 08:07:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.228 08:07:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.228 08:07:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:04.228 08:07:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.491 08:07:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.491 08:07:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.491 08:07:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:04.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:15:04.491 00:15:04.491 --- 10.0.0.2 ping statistics --- 00:15:04.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.491 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:15:04.491 08:07:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:15:04.491 00:15:04.491 --- 10.0.0.1 ping statistics --- 00:15:04.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.491 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:04.491 08:07:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.491 08:07:34 -- nvmf/common.sh@410 -- # return 0 00:15:04.491 08:07:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:04.491 08:07:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.491 08:07:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:04.491 08:07:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:04.491 08:07:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.491 08:07:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:04.491 08:07:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:04.491 08:07:34 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:04.491 08:07:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:04.491 08:07:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:04.491 08:07:34 -- common/autotest_common.sh@10 -- # set +x 00:15:04.492 08:07:34 -- nvmf/common.sh@469 -- # nvmfpid=988832 00:15:04.492 08:07:34 -- nvmf/common.sh@470 -- # waitforlisten 988832 00:15:04.492 08:07:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:04.492 08:07:34 -- common/autotest_common.sh@819 -- # '[' -z 988832 ']' 00:15:04.492 08:07:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.492 08:07:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:04.492 08:07:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.492 08:07:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:04.492 08:07:34 -- common/autotest_common.sh@10 -- # set +x 00:15:04.492 [2024-06-11 08:07:35.048340] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:04.492 [2024-06-11 08:07:35.048391] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.492 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.492 [2024-06-11 08:07:35.116331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:04.753 [2024-06-11 08:07:35.186590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:04.753 [2024-06-11 08:07:35.186713] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.753 [2024-06-11 08:07:35.186721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.753 [2024-06-11 08:07:35.186728] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
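The nvmf_tcp_init sequence above builds the test topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target (10.0.0.2/24), the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1/24), TCP port 4420 is opened in iptables, and both directions are ping-checked before the target application comes up (its reactor start-up notices follow below). A condensed sketch of that setup, using the interface, namespace, and address values from the log; the real helper in test/nvmf/common.sh handles more configurations than this.

```bash
#!/usr/bin/env bash
# Condensed sketch of the topology nvmf_tcp_init builds above: one physical port
# becomes the target inside its own netns, the other stays in the root netns as
# the initiator, so NVMe/TCP traffic really crosses the link between the two ports.
set -e
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                         # target port leaves the root netns
ip addr add 10.0.0.1/24 dev "$INI_IF"                     # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF" # target side
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator
```

Once both pings succeed, the target is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3, as echoed a few entries above), so every NVMe/TCP connection in this test traverses the physical ports rather than loopback.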
00:15:04.753 [2024-06-11 08:07:35.186818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.753 [2024-06-11 08:07:35.186820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.323 08:07:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:05.323 08:07:35 -- common/autotest_common.sh@852 -- # return 0 00:15:05.323 08:07:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:05.323 08:07:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:05.323 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.323 08:07:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.323 08:07:35 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.323 08:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.323 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.323 [2024-06-11 08:07:35.846482] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.323 08:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.323 08:07:35 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.323 08:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.323 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.323 08:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.323 08:07:35 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.323 08:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.323 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.323 [2024-06-11 08:07:35.862602] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.323 08:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.323 08:07:35 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:05.323 08:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.323 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.324 NULL1 00:15:05.324 08:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.324 08:07:35 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:05.324 08:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.324 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.324 Delay0 00:15:05.324 08:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.324 08:07:35 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.324 08:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.324 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.324 08:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.324 08:07:35 -- target/delete_subsystem.sh@28 -- # perf_pid=989004 00:15:05.324 08:07:35 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:05.324 08:07:35 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:05.324 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.324 [2024-06-11 08:07:35.947280] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:07.869 08:07:37 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:07.869 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ... starting I/O failed: -6 ...
00:15:07.870 [2024-06-11 08:07:38.191034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbd040 is same with the state(5) to be set
00:15:07.870 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ... starting I/O failed: -6 ...
00:15:07.870 [2024-06-11 08:07:38.196167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe79400c350 is same with the state(5) to be set
00:15:07.870 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:15:08.813 [2024-06-11 08:07:39.169417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbe5e0 is same with the state(5) to be set
00:15:08.813 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:15:08.813 [2024-06-11 08:07:39.194123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3910 is same with the state(5) to be set
00:15:08.813 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:15:08.814 [2024-06-11 08:07:39.194569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9d8b0 is same with the state(5) to be set
00:15:08.814 Read completed with error (sct=0, sc=8) ... Write completed with error (sct=0, sc=8) ...
00:15:08.814 [2024-06-11 08:07:39.198316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe79400bf20 is same with the state(5) to be set
00:15:08.814 Read completed with error (sct=0, sc=8) ... 00:15:08.814 Read completed with error
(sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Write completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Write completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 Read completed with error (sct=0, sc=8) 00:15:08.814 [2024-06-11 08:07:39.198401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe79400c600 is same with the state(5) to be set 00:15:08.814 [2024-06-11 08:07:39.198907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbe5e0 (9): Bad file descriptor 00:15:08.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:08.814 Initializing NVMe Controllers 00:15:08.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.814 Controller IO queue size 128, less than required. 00:15:08.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:08.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:08.814 Initialization complete. Launching workers. 00:15:08.814 ======================================================== 00:15:08.814 Latency(us) 00:15:08.814 Device Information : IOPS MiB/s Average min max 00:15:08.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.91 0.08 889619.29 242.82 1005532.33 00:15:08.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 168.92 0.08 897895.76 282.32 1009429.95 00:15:08.814 ======================================================== 00:15:08.814 Total : 340.83 0.17 893721.23 242.82 1009429.95 00:15:08.814 00:15:08.814 08:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:08.814 08:07:39 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:08.814 08:07:39 -- target/delete_subsystem.sh@35 -- # kill -0 989004 00:15:08.814 08:07:39 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:09.075 08:07:39 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:09.075 08:07:39 -- target/delete_subsystem.sh@35 -- # kill -0 989004 00:15:09.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (989004) - No such process 00:15:09.075 08:07:39 -- target/delete_subsystem.sh@45 -- # NOT wait 989004 00:15:09.075 08:07:39 -- common/autotest_common.sh@640 -- # local es=0 00:15:09.075 08:07:39 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 989004 00:15:09.075 08:07:39 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:09.075 08:07:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:09.075 08:07:39 -- common/autotest_common.sh@632 -- # type -t wait 00:15:09.075 08:07:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:09.075 08:07:39 -- common/autotest_common.sh@643 -- # wait 989004 00:15:09.075 08:07:39 -- common/autotest_common.sh@643 -- # es=1 00:15:09.075 08:07:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:09.075 08:07:39 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:09.075 08:07:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:09.075 08:07:39 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:09.075 08:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.075 08:07:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.336 08:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.336 08:07:39 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.336 08:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.336 08:07:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.336 [2024-06-11 08:07:39.731356] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.336 08:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.336 08:07:39 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.336 08:07:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:09.336 08:07:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.336 08:07:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:09.336 08:07:39 -- target/delete_subsystem.sh@54 -- # perf_pid=989706 00:15:09.336 08:07:39 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:09.336 08:07:39 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:09.336 08:07:39 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:09.336 08:07:39 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:09.336 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.336 [2024-06-11 08:07:39.778313] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
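The burst of "Read/Write completed with error (sct=0, sc=8)" entries above is the expected outcome of the first round: nvmf_delete_subsystem is issued while spdk_nvme_perf still has a 128-deep queue against the Delay0 namespace, so every in-flight command is failed back and perf exits with "errors occurred". The entries here then re-create the subsystem for a second, 3-second run that is allowed to finish cleanly. Below is a sketch of the first-round sequence driven with scripts/rpc.py; rpc.py over the default /var/tmp/spdk.sock stands in for the test's rpc_cmd wrapper and assumes nvmf_tgt is already running, while the NQN, serial, bdev, and perf parameters are the ones echoed in the log.

```bash
#!/usr/bin/env bash
# Sketch of the delete-while-busy round (assumes a running nvmf_tgt reachable on
# the default RPC socket; rpc.py stands in for the rpc_cmd wrapper used by the test).
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
# the delay bdev keeps completions slow, so plenty of I/O is still queued
# when the subsystem disappears underneath the initiator
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns "$NQN" Delay0

./build/bin/spdk_nvme_perf -c 0xC -q 128 -o 512 -w randrw -M 70 -t 5 -P 4 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
perf_pid=$!
sleep 2
$RPC nvmf_delete_subsystem "$NQN"        # in-flight commands now complete with errors
wait "$perf_pid" || echo "spdk_nvme_perf reported errors, as this test expects"
```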
00:15:09.908 08:07:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:09.908 08:07:40 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:09.908 08:07:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:10.168 08:07:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:10.168 08:07:40 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:10.168 08:07:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:10.739 08:07:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:10.739 08:07:41 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:10.739 08:07:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:11.312 08:07:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:11.312 08:07:41 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:11.312 08:07:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:11.885 08:07:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:11.885 08:07:42 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:11.885 08:07:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:12.145 08:07:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:12.145 08:07:42 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:12.146 08:07:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:12.406 Initializing NVMe Controllers 00:15:12.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.406 Controller IO queue size 128, less than required. 00:15:12.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:12.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:12.406 Initialization complete. Launching workers. 
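Between the perf start-up output above and the latency summary that follows, the script sits in a polling loop: kill -0 probes the perf PID without sending a signal, sleep 0.5 paces the retries, and a counter bounds the wait so a hung run fails the test instead of stalling it (the "(989706) - No such process" entry further down is that loop noticing perf has exited). A minimal sketch of the bounded wait, with perf_pid assumed to be the backgrounded spdk_nvme_perf from the same shell:

```bash
# Bounded wait on the perf process, as delete_subsystem.sh does (0.5 s step and a
# small retry budget as seen in the log); perf_pid is assumed to be set by the
# earlier backgrounded spdk_nvme_perf invocation.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do    # kill -0 only tests existence, sends no signal
    if (( delay++ > 20 )); then
        echo "spdk_nvme_perf (pid $perf_pid) did not finish in time" >&2
        exit 1
    fi
    sleep 0.5
done
wait "$perf_pid"    # reap it and pick up its exit status once the loop sees it is gone
```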
00:15:12.406 ========================================================
00:15:12.406 Latency(us)
00:15:12.406 Device Information : IOPS MiB/s Average min max
00:15:12.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002216.45 1000170.17 1008158.96
00:15:12.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002871.53 1000267.65 1008859.71
00:15:12.406 ========================================================
00:15:12.406 Total : 256.00 0.12 1002543.99 1000170.17 1008859.71
00:15:12.406
00:15:12.705 08:07:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:12.705 08:07:43 -- target/delete_subsystem.sh@57 -- # kill -0 989706 00:15:12.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (989706) - No such process 00:15:12.705 08:07:43 -- target/delete_subsystem.sh@67 -- # wait 989706 00:15:12.705 08:07:43 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:12.705 08:07:43 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:12.705 08:07:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:12.705 08:07:43 -- nvmf/common.sh@116 -- # sync 00:15:12.705 08:07:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:12.705 08:07:43 -- nvmf/common.sh@119 -- # set +e 00:15:12.705 08:07:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:12.705 08:07:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:12.705 rmmod nvme_tcp 00:15:12.705 rmmod nvme_fabrics 00:15:12.705 rmmod nvme_keyring 00:15:12.968 08:07:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:12.968 08:07:43 -- nvmf/common.sh@123 -- # set -e 00:15:12.968 08:07:43 -- nvmf/common.sh@124 -- # return 0 00:15:12.968 08:07:43 -- nvmf/common.sh@477 -- # '[' -n 988832 ']' 00:15:12.968 08:07:43 -- nvmf/common.sh@478 -- # killprocess 988832 00:15:12.968 08:07:43 -- common/autotest_common.sh@926 -- # '[' -z 988832 ']' 00:15:12.968 08:07:43 -- common/autotest_common.sh@930 -- # kill -0 988832 00:15:12.968 08:07:43 -- common/autotest_common.sh@931 -- # uname 00:15:12.968 08:07:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:12.968 08:07:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 988832 00:15:12.968 08:07:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:12.968 08:07:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:12.968 08:07:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 988832' 00:15:12.968 killing process with pid 988832 00:15:12.968 08:07:43 -- common/autotest_common.sh@945 -- # kill 988832 00:15:12.968 08:07:43 -- common/autotest_common.sh@950 -- # wait 988832 00:15:12.968 08:07:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:12.968 08:07:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:12.968 08:07:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:12.968 08:07:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.968 08:07:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:12.968 08:07:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.968 08:07:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.968 08:07:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.516 08:07:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:15.516 00:15:15.516 real 0m17.843s 00:15:15.516 user 0m30.865s 00:15:15.516 sys 0m6.115s 00:15:15.516 08:07:45
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.516 08:07:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.516 ************************************ 00:15:15.516 END TEST nvmf_delete_subsystem 00:15:15.516 ************************************ 00:15:15.516 08:07:45 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:15.516 08:07:45 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:15.516 08:07:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:15.516 08:07:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:15.516 08:07:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.516 ************************************ 00:15:15.516 START TEST nvmf_nvme_cli 00:15:15.516 ************************************ 00:15:15.516 08:07:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:15.516 * Looking for test storage... 00:15:15.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.516 08:07:45 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.516 08:07:45 -- nvmf/common.sh@7 -- # uname -s 00:15:15.516 08:07:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.516 08:07:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.516 08:07:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.516 08:07:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.516 08:07:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.516 08:07:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.516 08:07:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.516 08:07:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.516 08:07:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.516 08:07:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.516 08:07:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:15.516 08:07:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:15.516 08:07:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.516 08:07:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.516 08:07:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.516 08:07:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.516 08:07:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.516 08:07:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.516 08:07:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.516 08:07:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.516 08:07:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.517 08:07:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.517 08:07:45 -- paths/export.sh@5 -- # export PATH 00:15:15.517 08:07:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.517 08:07:45 -- nvmf/common.sh@46 -- # : 0 00:15:15.517 08:07:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:15.517 08:07:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:15.517 08:07:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:15.517 08:07:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.517 08:07:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.517 08:07:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:15.517 08:07:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:15.517 08:07:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:15.517 08:07:45 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.517 08:07:45 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.517 08:07:45 -- target/nvme_cli.sh@14 -- # devs=() 00:15:15.517 08:07:45 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:15.517 08:07:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:15.517 08:07:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.517 08:07:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:15.517 08:07:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:15.517 08:07:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:15.517 08:07:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.517 08:07:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.517 08:07:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.517 08:07:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:15.517 08:07:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:15.517 08:07:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:15.517 08:07:45 -- common/autotest_common.sh@10 -- # set +x 00:15:22.107 08:07:52 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:22.107 08:07:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:22.107 08:07:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:22.107 08:07:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:22.107 08:07:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:22.107 08:07:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:22.107 08:07:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:22.107 08:07:52 -- nvmf/common.sh@294 -- # net_devs=() 00:15:22.107 08:07:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:22.107 08:07:52 -- nvmf/common.sh@295 -- # e810=() 00:15:22.107 08:07:52 -- nvmf/common.sh@295 -- # local -ga e810 00:15:22.107 08:07:52 -- nvmf/common.sh@296 -- # x722=() 00:15:22.107 08:07:52 -- nvmf/common.sh@296 -- # local -ga x722 00:15:22.107 08:07:52 -- nvmf/common.sh@297 -- # mlx=() 00:15:22.107 08:07:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:22.107 08:07:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.107 08:07:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:22.107 08:07:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:22.107 08:07:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:22.107 08:07:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:22.107 08:07:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:22.107 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:22.107 08:07:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:22.107 08:07:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:22.107 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:22.107 08:07:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
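The block above is nvmf/common.sh filtering the PCI bus for NICs the harness supports (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox vendor-0x15b3 parts). Outside the harness the same check can be approximated with lspci; a sketch, with the device IDs taken from the trace (this node matched the two 0x8086:0x159b E810 functions bound to the ice driver):

# list candidate NVMe-oF NICs by vendor:device ID
lspci -Dnn | grep -Ei '8086:(1592|159b|37d2)|15b3:'
# the script then resolves each match to its netdev via sysfs, as in the trace:
ls /sys/bus/pci/devices/0000:31:00.0/net /sys/bus/pci/devices/0000:31:00.1/net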
00:15:22.107 08:07:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:22.107 08:07:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:22.107 08:07:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.107 08:07:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:22.107 08:07:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.107 08:07:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:22.107 Found net devices under 0000:31:00.0: cvl_0_0 00:15:22.107 08:07:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.107 08:07:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:22.107 08:07:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.107 08:07:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:22.107 08:07:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.107 08:07:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:22.107 Found net devices under 0000:31:00.1: cvl_0_1 00:15:22.107 08:07:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.107 08:07:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:22.107 08:07:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:22.107 08:07:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:22.107 08:07:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:22.107 08:07:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.107 08:07:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.107 08:07:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.107 08:07:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:22.107 08:07:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.107 08:07:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.107 08:07:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:22.107 08:07:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.107 08:07:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.107 08:07:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:22.107 08:07:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:22.107 08:07:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.107 08:07:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.368 08:07:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.368 08:07:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.368 08:07:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:22.368 08:07:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.368 08:07:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.368 08:07:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.368 08:07:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:22.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:22.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:15:22.368 00:15:22.368 --- 10.0.0.2 ping statistics --- 00:15:22.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.368 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:15:22.368 08:07:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:15:22.368 00:15:22.368 --- 10.0.0.1 ping statistics --- 00:15:22.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.368 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:15:22.368 08:07:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.368 08:07:52 -- nvmf/common.sh@410 -- # return 0 00:15:22.368 08:07:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:22.368 08:07:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.368 08:07:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:22.368 08:07:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:22.368 08:07:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.368 08:07:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:22.368 08:07:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:22.368 08:07:52 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:22.368 08:07:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:22.368 08:07:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:22.368 08:07:52 -- common/autotest_common.sh@10 -- # set +x 00:15:22.368 08:07:52 -- nvmf/common.sh@469 -- # nvmfpid=994768 00:15:22.368 08:07:52 -- nvmf/common.sh@470 -- # waitforlisten 994768 00:15:22.368 08:07:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.368 08:07:52 -- common/autotest_common.sh@819 -- # '[' -z 994768 ']' 00:15:22.368 08:07:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.368 08:07:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:22.368 08:07:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.368 08:07:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:22.368 08:07:52 -- common/autotest_common.sh@10 -- # set +x 00:15:22.368 [2024-06-11 08:07:53.002022] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:22.368 [2024-06-11 08:07:53.002071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.629 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.629 [2024-06-11 08:07:53.067910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.629 [2024-06-11 08:07:53.131717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:22.629 [2024-06-11 08:07:53.131853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.629 [2024-06-11 08:07:53.131863] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:22.629 [2024-06-11 08:07:53.131871] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.629 [2024-06-11 08:07:53.132049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.629 [2024-06-11 08:07:53.132167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.629 [2024-06-11 08:07:53.132324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.629 [2024-06-11 08:07:53.132324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.201 08:07:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:23.201 08:07:53 -- common/autotest_common.sh@852 -- # return 0 00:15:23.201 08:07:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:23.201 08:07:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:23.201 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.201 08:07:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.201 08:07:53 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.201 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.201 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.201 [2024-06-11 08:07:53.800585] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.201 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.201 08:07:53 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.201 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.201 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.201 Malloc0 00:15:23.201 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.201 08:07:53 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:23.201 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.202 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.202 Malloc1 00:15:23.202 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.202 08:07:53 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:23.202 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.462 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.462 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.462 08:07:53 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.462 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.462 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.462 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.462 08:07:53 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.462 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.462 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.462 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.462 08:07:53 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.462 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.462 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.462 [2024-06-11 08:07:53.890637] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:23.462 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.462 08:07:53 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.462 08:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.462 08:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.462 08:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.462 08:07:53 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:23.462 00:15:23.462 Discovery Log Number of Records 2, Generation counter 2 00:15:23.462 =====Discovery Log Entry 0====== 00:15:23.462 trtype: tcp 00:15:23.462 adrfam: ipv4 00:15:23.462 subtype: current discovery subsystem 00:15:23.462 treq: not required 00:15:23.462 portid: 0 00:15:23.462 trsvcid: 4420 00:15:23.462 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:23.462 traddr: 10.0.0.2 00:15:23.462 eflags: explicit discovery connections, duplicate discovery information 00:15:23.462 sectype: none 00:15:23.462 =====Discovery Log Entry 1====== 00:15:23.462 trtype: tcp 00:15:23.462 adrfam: ipv4 00:15:23.462 subtype: nvme subsystem 00:15:23.462 treq: not required 00:15:23.462 portid: 0 00:15:23.462 trsvcid: 4420 00:15:23.462 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:23.462 traddr: 10.0.0.2 00:15:23.462 eflags: none 00:15:23.462 sectype: none 00:15:23.462 08:07:54 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:23.462 08:07:54 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:23.462 08:07:54 -- nvmf/common.sh@510 -- # local dev _ 00:15:23.462 08:07:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:23.462 08:07:54 -- nvmf/common.sh@509 -- # nvme list 00:15:23.462 08:07:54 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:23.462 08:07:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:23.462 08:07:54 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:23.462 08:07:54 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:23.462 08:07:54 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:23.462 08:07:54 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.374 08:07:55 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:25.374 08:07:55 -- common/autotest_common.sh@1177 -- # local i=0 00:15:25.374 08:07:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.374 08:07:55 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:25.374 08:07:55 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:25.374 08:07:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:27.289 08:07:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:27.289 08:07:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:27.289 08:07:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.289 08:07:57 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:27.289 08:07:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.289 08:07:57 -- common/autotest_common.sh@1187 -- # return 0 00:15:27.289 08:07:57 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:27.289 08:07:57 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:27.289 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@509 -- # nvme list 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:27.290 /dev/nvme0n1 ]] 00:15:27.290 08:07:57 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:27.290 08:07:57 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:27.290 08:07:57 -- nvmf/common.sh@510 -- # local dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@509 -- # nvme list 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:27.290 08:07:57 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:27.290 08:07:57 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:27.290 08:07:57 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:27.290 08:07:57 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.290 08:07:57 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.290 08:07:57 -- common/autotest_common.sh@1198 -- # local i=0 00:15:27.290 08:07:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:27.290 08:07:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.290 08:07:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:27.290 08:07:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.290 08:07:57 -- common/autotest_common.sh@1210 -- # return 0 00:15:27.290 08:07:57 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:27.290 08:07:57 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.290 08:07:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.290 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 08:07:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.290 08:07:57 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:27.290 08:07:57 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:27.290 08:07:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:27.290 08:07:57 -- nvmf/common.sh@116 -- # sync 00:15:27.290 08:07:57 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:27.290 08:07:57 -- nvmf/common.sh@119 -- # set +e 00:15:27.290 08:07:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:27.290 08:07:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:27.290 rmmod nvme_tcp 00:15:27.290 rmmod nvme_fabrics 00:15:27.290 rmmod nvme_keyring 00:15:27.290 08:07:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:27.290 08:07:57 -- nvmf/common.sh@123 -- # set -e 00:15:27.290 08:07:57 -- nvmf/common.sh@124 -- # return 0 00:15:27.290 08:07:57 -- nvmf/common.sh@477 -- # '[' -n 994768 ']' 00:15:27.290 08:07:57 -- nvmf/common.sh@478 -- # killprocess 994768 00:15:27.290 08:07:57 -- common/autotest_common.sh@926 -- # '[' -z 994768 ']' 00:15:27.290 08:07:57 -- common/autotest_common.sh@930 -- # kill -0 994768 00:15:27.290 08:07:57 -- common/autotest_common.sh@931 -- # uname 00:15:27.290 08:07:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.290 08:07:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 994768 00:15:27.290 08:07:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:27.290 08:07:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:27.290 08:07:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 994768' 00:15:27.290 killing process with pid 994768 00:15:27.290 08:07:57 -- common/autotest_common.sh@945 -- # kill 994768 00:15:27.290 08:07:57 -- common/autotest_common.sh@950 -- # wait 994768 00:15:27.551 08:07:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:27.551 08:07:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:27.551 08:07:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:27.551 08:07:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.551 08:07:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:27.551 08:07:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.551 08:07:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.551 08:07:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.464 08:08:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:29.464 00:15:29.464 real 0m14.432s 00:15:29.464 user 0m21.573s 00:15:29.464 sys 0m5.771s 00:15:29.464 08:08:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.464 08:08:00 -- common/autotest_common.sh@10 -- # set +x 00:15:29.464 ************************************ 00:15:29.464 END TEST nvmf_nvme_cli 00:15:29.464 ************************************ 00:15:29.725 08:08:00 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:15:29.725 08:08:00 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:29.725 08:08:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:29.725 08:08:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:29.725 08:08:00 -- common/autotest_common.sh@10 -- # set +x 00:15:29.725 ************************************ 00:15:29.725 START TEST nvmf_host_management 00:15:29.725 ************************************ 00:15:29.725 08:08:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:29.725 * Looking for test storage... 
00:15:29.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.725 08:08:00 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.725 08:08:00 -- nvmf/common.sh@7 -- # uname -s 00:15:29.725 08:08:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.725 08:08:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.725 08:08:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.725 08:08:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.725 08:08:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.725 08:08:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.725 08:08:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.725 08:08:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.725 08:08:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.725 08:08:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.725 08:08:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.725 08:08:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.725 08:08:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.726 08:08:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.726 08:08:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.726 08:08:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.726 08:08:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.726 08:08:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.726 08:08:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.726 08:08:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 08:08:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 08:08:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 08:08:00 -- paths/export.sh@5 -- # export PATH 00:15:29.726 08:08:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.726 08:08:00 -- nvmf/common.sh@46 -- # : 0 00:15:29.726 08:08:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:29.726 08:08:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:29.726 08:08:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:29.726 08:08:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.726 08:08:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.726 08:08:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:29.726 08:08:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:29.726 08:08:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:29.726 08:08:00 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.726 08:08:00 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.726 08:08:00 -- target/host_management.sh@104 -- # nvmftestinit 00:15:29.726 08:08:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:29.726 08:08:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.726 08:08:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:29.726 08:08:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:29.726 08:08:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:29.726 08:08:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.726 08:08:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.726 08:08:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.726 08:08:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:29.726 08:08:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:29.726 08:08:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:29.726 08:08:00 -- common/autotest_common.sh@10 -- # set +x 00:15:37.872 08:08:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.872 08:08:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:37.872 08:08:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:37.872 08:08:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:37.872 08:08:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:37.872 08:08:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:37.872 08:08:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:37.872 08:08:07 -- nvmf/common.sh@294 -- # net_devs=() 00:15:37.872 08:08:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:37.872 
08:08:07 -- nvmf/common.sh@295 -- # e810=() 00:15:37.872 08:08:07 -- nvmf/common.sh@295 -- # local -ga e810 00:15:37.872 08:08:07 -- nvmf/common.sh@296 -- # x722=() 00:15:37.872 08:08:07 -- nvmf/common.sh@296 -- # local -ga x722 00:15:37.872 08:08:07 -- nvmf/common.sh@297 -- # mlx=() 00:15:37.872 08:08:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:37.872 08:08:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.872 08:08:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:37.872 08:08:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:37.872 08:08:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:37.872 08:08:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:37.872 08:08:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:37.872 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:37.872 08:08:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:37.872 08:08:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:37.872 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:37.872 08:08:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:37.872 08:08:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:37.872 08:08:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:37.872 08:08:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.872 08:08:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:37.872 08:08:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.872 08:08:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:15:37.872 Found net devices under 0000:31:00.0: cvl_0_0 00:15:37.872 08:08:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.872 08:08:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:37.872 08:08:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.872 08:08:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:37.872 08:08:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.872 08:08:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:37.872 Found net devices under 0000:31:00.1: cvl_0_1 00:15:37.872 08:08:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.872 08:08:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:37.872 08:08:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:37.872 08:08:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:37.873 08:08:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:37.873 08:08:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:37.873 08:08:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.873 08:08:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.873 08:08:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.873 08:08:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:37.873 08:08:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.873 08:08:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.873 08:08:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:37.873 08:08:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.873 08:08:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.873 08:08:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:37.873 08:08:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:37.873 08:08:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.873 08:08:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.873 08:08:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.873 08:08:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.873 08:08:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:37.873 08:08:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:37.873 08:08:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.873 08:08:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.873 08:08:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:37.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:15:37.873 00:15:37.873 --- 10.0.0.2 ping statistics --- 00:15:37.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.873 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:15:37.873 08:08:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:37.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:15:37.873 00:15:37.873 --- 10.0.0.1 ping statistics --- 00:15:37.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.873 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:15:37.873 08:08:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.873 08:08:07 -- nvmf/common.sh@410 -- # return 0 00:15:37.873 08:08:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:37.873 08:08:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.873 08:08:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:37.873 08:08:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:37.873 08:08:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.873 08:08:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:37.873 08:08:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:37.873 08:08:07 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:15:37.873 08:08:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:37.873 08:08:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:37.873 08:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:37.873 ************************************ 00:15:37.873 START TEST nvmf_host_management 00:15:37.873 ************************************ 00:15:37.873 08:08:07 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:15:37.873 08:08:07 -- target/host_management.sh@69 -- # starttarget 00:15:37.873 08:08:07 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:37.873 08:08:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:37.873 08:08:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:37.873 08:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:37.873 08:08:07 -- nvmf/common.sh@469 -- # nvmfpid=999938 00:15:37.873 08:08:07 -- nvmf/common.sh@470 -- # waitforlisten 999938 00:15:37.873 08:08:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:37.873 08:08:07 -- common/autotest_common.sh@819 -- # '[' -z 999938 ']' 00:15:37.873 08:08:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.873 08:08:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:37.873 08:08:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.873 08:08:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:37.873 08:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:37.873 [2024-06-11 08:08:07.591008] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:37.873 [2024-06-11 08:08:07.591066] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.873 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.873 [2024-06-11 08:08:07.678018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.873 [2024-06-11 08:08:07.771546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:37.873 [2024-06-11 08:08:07.771699] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.873 [2024-06-11 08:08:07.771711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.873 [2024-06-11 08:08:07.771721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.873 [2024-06-11 08:08:07.771869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.873 [2024-06-11 08:08:07.772005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.873 [2024-06-11 08:08:07.772174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.873 [2024-06-11 08:08:07.772174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:37.873 08:08:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:37.873 08:08:08 -- common/autotest_common.sh@852 -- # return 0 00:15:37.873 08:08:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:37.873 08:08:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:37.873 08:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.873 08:08:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.873 08:08:08 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.873 08:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.873 08:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.873 [2024-06-11 08:08:08.410449] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.873 08:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.873 08:08:08 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:37.873 08:08:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:37.873 08:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.873 08:08:08 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:37.873 08:08:08 -- target/host_management.sh@23 -- # cat 00:15:37.873 08:08:08 -- target/host_management.sh@30 -- # rpc_cmd 00:15:37.873 08:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:37.873 08:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.873 Malloc0 00:15:37.873 [2024-06-11 08:08:08.469702] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.873 08:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:37.873 08:08:08 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:37.873 08:08:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:37.873 08:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:38.135 08:08:08 -- target/host_management.sh@73 -- # perfpid=1000289 00:15:38.135 08:08:08 -- target/host_management.sh@74 -- # 
waitforlisten 1000289 /var/tmp/bdevperf.sock 00:15:38.135 08:08:08 -- common/autotest_common.sh@819 -- # '[' -z 1000289 ']' 00:15:38.135 08:08:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:38.135 08:08:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:38.135 08:08:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:38.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:38.135 08:08:08 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:38.135 08:08:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:38.135 08:08:08 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:38.135 08:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:38.135 08:08:08 -- nvmf/common.sh@520 -- # config=() 00:15:38.135 08:08:08 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.135 08:08:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.135 08:08:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.135 { 00:15:38.135 "params": { 00:15:38.135 "name": "Nvme$subsystem", 00:15:38.135 "trtype": "$TEST_TRANSPORT", 00:15:38.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.135 "adrfam": "ipv4", 00:15:38.135 "trsvcid": "$NVMF_PORT", 00:15:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.135 "hdgst": ${hdgst:-false}, 00:15:38.135 "ddgst": ${ddgst:-false} 00:15:38.135 }, 00:15:38.135 "method": "bdev_nvme_attach_controller" 00:15:38.135 } 00:15:38.135 EOF 00:15:38.135 )") 00:15:38.135 08:08:08 -- nvmf/common.sh@542 -- # cat 00:15:38.135 08:08:08 -- nvmf/common.sh@544 -- # jq . 00:15:38.135 08:08:08 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.135 08:08:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.135 "params": { 00:15:38.135 "name": "Nvme0", 00:15:38.135 "trtype": "tcp", 00:15:38.135 "traddr": "10.0.0.2", 00:15:38.135 "adrfam": "ipv4", 00:15:38.135 "trsvcid": "4420", 00:15:38.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:38.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:38.135 "hdgst": false, 00:15:38.135 "ddgst": false 00:15:38.135 }, 00:15:38.135 "method": "bdev_nvme_attach_controller" 00:15:38.135 }' 00:15:38.135 [2024-06-11 08:08:08.565415] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:38.135 [2024-06-11 08:08:08.565471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000289 ] 00:15:38.135 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.135 [2024-06-11 08:08:08.624718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.135 [2024-06-11 08:08:08.687407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.395 Running I/O for 10 seconds... 
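The config fragment printed above is what gen_nvmf_target_json pipes to bdevperf via /dev/fd/63. A stand-alone equivalent would look roughly like the sketch below; the outer "subsystems"/"config" wrapper is the usual SPDK JSON-config layout and is assumed here (the trace only shows the inner per-controller fragment), and the temporary file name is illustrative.

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same bdevperf invocation as in the trace, reading the file instead of /dev/fd/63
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10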
00:15:38.968 08:08:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:38.968 08:08:09 -- common/autotest_common.sh@852 -- # return 0 00:15:38.968 08:08:09 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:38.968 08:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.968 08:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.968 08:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.968 08:08:09 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:38.968 08:08:09 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:38.968 08:08:09 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:38.968 08:08:09 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:38.968 08:08:09 -- target/host_management.sh@52 -- # local ret=1 00:15:38.968 08:08:09 -- target/host_management.sh@53 -- # local i 00:15:38.968 08:08:09 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:38.968 08:08:09 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:38.968 08:08:09 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:38.969 08:08:09 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:38.969 08:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.969 08:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.969 08:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.969 08:08:09 -- target/host_management.sh@55 -- # read_io_count=1541 00:15:38.969 08:08:09 -- target/host_management.sh@58 -- # '[' 1541 -ge 100 ']' 00:15:38.969 08:08:09 -- target/host_management.sh@59 -- # ret=0 00:15:38.969 08:08:09 -- target/host_management.sh@60 -- # break 00:15:38.969 08:08:09 -- target/host_management.sh@64 -- # return 0 00:15:38.969 08:08:09 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:38.969 08:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.969 08:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.969 [2024-06-11 08:08:09.396764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396809] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the 
state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.396993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.397000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1003cb0 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.397014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.969 [2024-06-11 08:08:09.397050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.397062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.969 [2024-06-11 08:08:09.397070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.397078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.969 [2024-06-11 08:08:09.397085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.397093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.969 [2024-06-11 08:08:09.397100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.397108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2384450 is same with the state(5) to be set 00:15:38.969 [2024-06-11 08:08:09.398599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84224 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.969 [2024-06-11 08:08:09.398889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.969 [2024-06-11 08:08:09.398898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.398908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.398915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.398924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.398931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.398940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.398947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.398956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.398963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.398972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.398979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.398988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.398995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:38.970 [2024-06-11 08:08:09.399043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 
[2024-06-11 08:08:09.399208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 
08:08:09.399371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.970 [2024-06-11 08:08:09.399553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.970 [2024-06-11 08:08:09.399561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:38.971 [2024-06-11 08:08:09.399700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.971 [2024-06-11 08:08:09.399754] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2382110 was disconnected and freed. reset controller. 
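The burst of ABORTED - SQ DELETION completions above is the expected initiator-side view of this test step: the target is told to drop the host from the subsystem while the verify job still has commands in flight, so the queue pair is deleted, every outstanding command comes back aborted, and bdevperf then resets the controller. A sketch of the two rpc.py calls that bracket this phase, with the subsystem and host NQNs taken from the log:

  # Revoke the host's access (outstanding I/O on the existing queue pair gets aborted) ...
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # ... then restore it so the controller reset that follows can reconnect
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0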
00:15:38.971 [2024-06-11 08:08:09.400939] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:38.971 08:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.971 08:08:09 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:38.971 08:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:38.971 08:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.971 task offset: 88576 on job bdev=Nvme0n1 fails 00:15:38.971 00:15:38.971 Latency(us) 00:15:38.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.971 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:38.971 Job: Nvme0n1 ended in about 0.54 seconds with error 00:15:38.971 Verification LBA range: start 0x0 length 0x400 00:15:38.971 Nvme0n1 : 0.54 3178.36 198.65 119.59 0.00 19070.75 1495.04 25012.91 00:15:38.971 =================================================================================================================== 00:15:38.971 Total : 3178.36 198.65 119.59 0.00 19070.75 1495.04 25012.91 00:15:38.971 [2024-06-11 08:08:09.402905] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:38.971 [2024-06-11 08:08:09.402926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2384450 (9): Bad file descriptor 00:15:38.971 08:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:38.971 08:08:09 -- target/host_management.sh@87 -- # sleep 1 00:15:38.971 [2024-06-11 08:08:09.414461] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:39.914 08:08:10 -- target/host_management.sh@91 -- # kill -9 1000289 00:15:39.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1000289) - No such process 00:15:39.914 08:08:10 -- target/host_management.sh@91 -- # true 00:15:39.914 08:08:10 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:39.914 08:08:10 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:39.914 08:08:10 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:39.914 08:08:10 -- nvmf/common.sh@520 -- # config=() 00:15:39.914 08:08:10 -- nvmf/common.sh@520 -- # local subsystem config 00:15:39.914 08:08:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:39.914 08:08:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:39.914 { 00:15:39.914 "params": { 00:15:39.914 "name": "Nvme$subsystem", 00:15:39.914 "trtype": "$TEST_TRANSPORT", 00:15:39.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.914 "adrfam": "ipv4", 00:15:39.914 "trsvcid": "$NVMF_PORT", 00:15:39.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.914 "hdgst": ${hdgst:-false}, 00:15:39.914 "ddgst": ${ddgst:-false} 00:15:39.914 }, 00:15:39.914 "method": "bdev_nvme_attach_controller" 00:15:39.914 } 00:15:39.914 EOF 00:15:39.914 )") 00:15:39.914 08:08:10 -- nvmf/common.sh@542 -- # cat 00:15:39.914 08:08:10 -- nvmf/common.sh@544 -- # jq . 
00:15:39.914 08:08:10 -- nvmf/common.sh@545 -- # IFS=, 00:15:39.914 08:08:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:39.914 "params": { 00:15:39.914 "name": "Nvme0", 00:15:39.914 "trtype": "tcp", 00:15:39.914 "traddr": "10.0.0.2", 00:15:39.914 "adrfam": "ipv4", 00:15:39.914 "trsvcid": "4420", 00:15:39.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:39.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:39.914 "hdgst": false, 00:15:39.914 "ddgst": false 00:15:39.914 }, 00:15:39.914 "method": "bdev_nvme_attach_controller" 00:15:39.914 }' 00:15:39.914 [2024-06-11 08:08:10.466535] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:39.914 [2024-06-11 08:08:10.466590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000650 ] 00:15:39.914 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.914 [2024-06-11 08:08:10.526212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.175 [2024-06-11 08:08:10.588403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.175 Running I/O for 1 seconds... 00:15:41.559 00:15:41.559 Latency(us) 00:15:41.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.559 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:41.559 Verification LBA range: start 0x0 length 0x400 00:15:41.559 Nvme0n1 : 1.01 3602.80 225.18 0.00 0.00 17489.10 1283.41 20097.71 00:15:41.559 =================================================================================================================== 00:15:41.559 Total : 3602.80 225.18 0.00 0.00 17489.10 1283.41 20097.71 00:15:41.559 08:08:11 -- target/host_management.sh@101 -- # stoptarget 00:15:41.559 08:08:11 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:41.559 08:08:11 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:41.559 08:08:11 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:41.559 08:08:11 -- target/host_management.sh@40 -- # nvmftestfini 00:15:41.559 08:08:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:41.559 08:08:11 -- nvmf/common.sh@116 -- # sync 00:15:41.559 08:08:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:41.559 08:08:11 -- nvmf/common.sh@119 -- # set +e 00:15:41.559 08:08:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:41.559 08:08:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:41.559 rmmod nvme_tcp 00:15:41.559 rmmod nvme_fabrics 00:15:41.559 rmmod nvme_keyring 00:15:41.559 08:08:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:41.559 08:08:11 -- nvmf/common.sh@123 -- # set -e 00:15:41.560 08:08:11 -- nvmf/common.sh@124 -- # return 0 00:15:41.560 08:08:11 -- nvmf/common.sh@477 -- # '[' -n 999938 ']' 00:15:41.560 08:08:11 -- nvmf/common.sh@478 -- # killprocess 999938 00:15:41.560 08:08:11 -- common/autotest_common.sh@926 -- # '[' -z 999938 ']' 00:15:41.560 08:08:11 -- common/autotest_common.sh@930 -- # kill -0 999938 00:15:41.560 08:08:11 -- common/autotest_common.sh@931 -- # uname 00:15:41.560 08:08:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:41.560 08:08:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 999938 00:15:41.560 08:08:12 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:41.560 08:08:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:41.560 08:08:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 999938' 00:15:41.560 killing process with pid 999938 00:15:41.560 08:08:12 -- common/autotest_common.sh@945 -- # kill 999938 00:15:41.560 08:08:12 -- common/autotest_common.sh@950 -- # wait 999938 00:15:41.560 [2024-06-11 08:08:12.145252] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:41.560 08:08:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:41.560 08:08:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:41.560 08:08:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:41.560 08:08:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.560 08:08:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:41.560 08:08:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.560 08:08:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.560 08:08:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.104 08:08:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:44.104 00:15:44.104 real 0m6.705s 00:15:44.104 user 0m19.951s 00:15:44.104 sys 0m1.086s 00:15:44.104 08:08:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.104 08:08:14 -- common/autotest_common.sh@10 -- # set +x 00:15:44.104 ************************************ 00:15:44.104 END TEST nvmf_host_management 00:15:44.104 ************************************ 00:15:44.104 08:08:14 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:44.104 00:15:44.104 real 0m14.139s 00:15:44.104 user 0m21.991s 00:15:44.104 sys 0m6.415s 00:15:44.104 08:08:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.104 08:08:14 -- common/autotest_common.sh@10 -- # set +x 00:15:44.104 ************************************ 00:15:44.104 END TEST nvmf_host_management 00:15:44.104 ************************************ 00:15:44.104 08:08:14 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:44.104 08:08:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:44.104 08:08:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:44.104 08:08:14 -- common/autotest_common.sh@10 -- # set +x 00:15:44.104 ************************************ 00:15:44.104 START TEST nvmf_lvol 00:15:44.104 ************************************ 00:15:44.104 08:08:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:44.104 * Looking for test storage... 
00:15:44.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.104 08:08:14 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.104 08:08:14 -- nvmf/common.sh@7 -- # uname -s 00:15:44.104 08:08:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.104 08:08:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.104 08:08:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.104 08:08:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.104 08:08:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.104 08:08:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.104 08:08:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.104 08:08:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.104 08:08:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.104 08:08:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.104 08:08:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:44.104 08:08:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:44.104 08:08:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.104 08:08:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.104 08:08:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.104 08:08:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.104 08:08:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.104 08:08:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.104 08:08:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.104 08:08:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.104 08:08:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.104 08:08:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.104 08:08:14 -- paths/export.sh@5 -- # export PATH 00:15:44.104 08:08:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.104 08:08:14 -- nvmf/common.sh@46 -- # : 0 00:15:44.104 08:08:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:44.104 08:08:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:44.104 08:08:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:44.104 08:08:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.104 08:08:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.104 08:08:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:44.104 08:08:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:44.104 08:08:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:44.104 08:08:14 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.104 08:08:14 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.104 08:08:14 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:44.104 08:08:14 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:44.104 08:08:14 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.104 08:08:14 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:44.104 08:08:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:44.104 08:08:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.104 08:08:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:44.104 08:08:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:44.104 08:08:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:44.104 08:08:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.104 08:08:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.104 08:08:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.104 08:08:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:44.104 08:08:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:44.104 08:08:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:44.104 08:08:14 -- common/autotest_common.sh@10 -- # set +x 00:15:50.690 08:08:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:50.690 08:08:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:50.690 08:08:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:50.690 08:08:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:50.690 08:08:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:50.690 08:08:21 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:15:50.690 08:08:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:50.690 08:08:21 -- nvmf/common.sh@294 -- # net_devs=() 00:15:50.690 08:08:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:50.690 08:08:21 -- nvmf/common.sh@295 -- # e810=() 00:15:50.690 08:08:21 -- nvmf/common.sh@295 -- # local -ga e810 00:15:50.690 08:08:21 -- nvmf/common.sh@296 -- # x722=() 00:15:50.690 08:08:21 -- nvmf/common.sh@296 -- # local -ga x722 00:15:50.690 08:08:21 -- nvmf/common.sh@297 -- # mlx=() 00:15:50.690 08:08:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:50.690 08:08:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.690 08:08:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:50.690 08:08:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:50.690 08:08:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:50.690 08:08:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:50.690 08:08:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:50.690 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:50.690 08:08:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:50.690 08:08:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:50.690 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:50.690 08:08:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:50.690 08:08:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:50.690 08:08:21 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.690 08:08:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:50.690 08:08:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.690 08:08:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:50.690 Found net devices under 0000:31:00.0: cvl_0_0 00:15:50.690 08:08:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.690 08:08:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:50.690 08:08:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.690 08:08:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:50.690 08:08:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.690 08:08:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:50.690 Found net devices under 0000:31:00.1: cvl_0_1 00:15:50.690 08:08:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.690 08:08:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:50.690 08:08:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:50.690 08:08:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:50.690 08:08:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:50.690 08:08:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.690 08:08:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.690 08:08:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.690 08:08:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:50.690 08:08:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.690 08:08:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.690 08:08:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:50.690 08:08:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.690 08:08:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.690 08:08:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:50.690 08:08:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:50.690 08:08:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.690 08:08:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.690 08:08:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.690 08:08:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.690 08:08:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:50.690 08:08:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.952 08:08:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.952 08:08:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.952 08:08:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:50.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:15:50.952 00:15:50.952 --- 10.0.0.2 ping statistics --- 00:15:50.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.952 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:15:50.952 08:08:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:15:50.952 00:15:50.952 --- 10.0.0.1 ping statistics --- 00:15:50.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.952 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:15:50.952 08:08:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.952 08:08:21 -- nvmf/common.sh@410 -- # return 0 00:15:50.952 08:08:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:50.952 08:08:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.952 08:08:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:50.952 08:08:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:50.952 08:08:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.952 08:08:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:50.952 08:08:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:50.952 08:08:21 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:50.952 08:08:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:50.952 08:08:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:50.952 08:08:21 -- common/autotest_common.sh@10 -- # set +x 00:15:50.952 08:08:21 -- nvmf/common.sh@469 -- # nvmfpid=1005116 00:15:50.952 08:08:21 -- nvmf/common.sh@470 -- # waitforlisten 1005116 00:15:50.952 08:08:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:50.952 08:08:21 -- common/autotest_common.sh@819 -- # '[' -z 1005116 ']' 00:15:50.952 08:08:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.952 08:08:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:50.952 08:08:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.952 08:08:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:50.952 08:08:21 -- common/autotest_common.sh@10 -- # set +x 00:15:50.952 [2024-06-11 08:08:21.517613] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:50.952 [2024-06-11 08:08:21.517672] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.952 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.952 [2024-06-11 08:08:21.589130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:51.213 [2024-06-11 08:08:21.662154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:51.213 [2024-06-11 08:08:21.662277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.213 [2024-06-11 08:08:21.662286] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.213 [2024-06-11 08:08:21.662292] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
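The app_setup_trace notices above spell out how to collect the tracepoint data this target was started with (the -e 0xFFFF mask enables all trace groups). A short sketch, assuming the spdk_trace binary from this build tree; the command and the shared-memory file name come straight from the notices:

  # Attach to the running target's trace buffer ...
  build/bin/spdk_trace -s nvmf -i 0
  # ... or keep a copy of the shm file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0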
00:15:51.213 [2024-06-11 08:08:21.662429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.213 [2024-06-11 08:08:21.662570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.213 [2024-06-11 08:08:21.662699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.785 08:08:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:51.785 08:08:22 -- common/autotest_common.sh@852 -- # return 0 00:15:51.785 08:08:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:51.785 08:08:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:51.785 08:08:22 -- common/autotest_common.sh@10 -- # set +x 00:15:51.785 08:08:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.785 08:08:22 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:52.046 [2024-06-11 08:08:22.467510] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.046 08:08:22 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.046 08:08:22 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:52.046 08:08:22 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.307 08:08:22 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:52.307 08:08:22 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:52.567 08:08:23 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:52.567 08:08:23 -- target/nvmf_lvol.sh@29 -- # lvs=5c43c46f-d18f-4b13-8067-8aad933a85fe 00:15:52.567 08:08:23 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c43c46f-d18f-4b13-8067-8aad933a85fe lvol 20 00:15:52.828 08:08:23 -- target/nvmf_lvol.sh@32 -- # lvol=dccbb71d-313b-47eb-9cc8-c7698568989b 00:15:52.828 08:08:23 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:53.089 08:08:23 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dccbb71d-313b-47eb-9cc8-c7698568989b 00:15:53.089 08:08:23 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:53.350 [2024-06-11 08:08:23.833755] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.350 08:08:23 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:53.610 08:08:24 -- target/nvmf_lvol.sh@42 -- # perf_pid=1005785 00:15:53.610 08:08:24 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:53.610 08:08:24 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:53.611 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.553 
08:08:25 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot dccbb71d-313b-47eb-9cc8-c7698568989b MY_SNAPSHOT 00:15:54.814 08:08:25 -- target/nvmf_lvol.sh@47 -- # snapshot=912ca6a5-a689-4d6c-81b2-a492def38b1c 00:15:54.814 08:08:25 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize dccbb71d-313b-47eb-9cc8-c7698568989b 30 00:15:54.814 08:08:25 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 912ca6a5-a689-4d6c-81b2-a492def38b1c MY_CLONE 00:15:55.075 08:08:25 -- target/nvmf_lvol.sh@49 -- # clone=fcd000f4-00a4-400f-b59e-456333395d99 00:15:55.075 08:08:25 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fcd000f4-00a4-400f-b59e-456333395d99 00:15:55.336 08:08:25 -- target/nvmf_lvol.sh@53 -- # wait 1005785 00:16:05.339 Initializing NVMe Controllers 00:16:05.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:05.339 Controller IO queue size 128, less than required. 00:16:05.339 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:05.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:05.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:05.339 Initialization complete. Launching workers. 00:16:05.339 ======================================================== 00:16:05.339 Latency(us) 00:16:05.339 Device Information : IOPS MiB/s Average min max 00:16:05.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12441.94 48.60 10293.76 1442.14 51240.62 00:16:05.339 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17900.74 69.92 7150.95 582.92 49005.01 00:16:05.339 ======================================================== 00:16:05.339 Total : 30342.68 118.53 8439.65 582.92 51240.62 00:16:05.339 00:16:05.339 08:08:34 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:05.339 08:08:34 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dccbb71d-313b-47eb-9cc8-c7698568989b 00:16:05.339 08:08:34 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c43c46f-d18f-4b13-8067-8aad933a85fe 00:16:05.339 08:08:34 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:05.339 08:08:34 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:05.339 08:08:34 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:05.339 08:08:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:05.339 08:08:34 -- nvmf/common.sh@116 -- # sync 00:16:05.339 08:08:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:05.339 08:08:34 -- nvmf/common.sh@119 -- # set +e 00:16:05.339 08:08:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:05.339 08:08:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:05.339 rmmod nvme_tcp 00:16:05.339 rmmod nvme_fabrics 00:16:05.339 rmmod nvme_keyring 00:16:05.339 08:08:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:05.339 08:08:34 -- nvmf/common.sh@123 -- # set -e 00:16:05.339 08:08:34 -- nvmf/common.sh@124 -- # return 0 00:16:05.339 08:08:34 -- nvmf/common.sh@477 -- # '[' -n 1005116 ']' 
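The nvmf_lvol workload above is driven by a fixed RPC sequence; in outline (rpc.py abbreviates the full scripts/rpc.py path used in the log, and the <...> placeholders stand for the UUIDs it prints, e.g. 5c43c46f-... for the lvstore and dccbb71d-... for the lvol):

    # lvstore on a raid0 of two malloc bdevs, one lvol exported over NVMe/TCP
    rpc.py bdev_malloc_create 64 512                    # Malloc0
    rpc.py bdev_malloc_create 64 512                    # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs           # prints the lvstore UUID
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20       # prints the lvol UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes to the namespace, exercise snapshot/resize/clone/inflate
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone-uuid>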
00:16:05.339 08:08:34 -- nvmf/common.sh@478 -- # killprocess 1005116 00:16:05.339 08:08:34 -- common/autotest_common.sh@926 -- # '[' -z 1005116 ']' 00:16:05.339 08:08:34 -- common/autotest_common.sh@930 -- # kill -0 1005116 00:16:05.339 08:08:34 -- common/autotest_common.sh@931 -- # uname 00:16:05.339 08:08:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.339 08:08:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1005116 00:16:05.339 08:08:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.339 08:08:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.339 08:08:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1005116' 00:16:05.339 killing process with pid 1005116 00:16:05.339 08:08:34 -- common/autotest_common.sh@945 -- # kill 1005116 00:16:05.339 08:08:34 -- common/autotest_common.sh@950 -- # wait 1005116 00:16:05.339 08:08:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:05.339 08:08:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:05.339 08:08:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:05.339 08:08:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.339 08:08:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:05.339 08:08:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.339 08:08:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.339 08:08:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.724 08:08:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:06.724 00:16:06.724 real 0m22.889s 00:16:06.724 user 1m3.544s 00:16:06.724 sys 0m7.426s 00:16:06.724 08:08:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.724 08:08:37 -- common/autotest_common.sh@10 -- # set +x 00:16:06.724 ************************************ 00:16:06.724 END TEST nvmf_lvol 00:16:06.724 ************************************ 00:16:06.724 08:08:37 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:06.724 08:08:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:06.724 08:08:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.724 08:08:37 -- common/autotest_common.sh@10 -- # set +x 00:16:06.724 ************************************ 00:16:06.724 START TEST nvmf_lvs_grow 00:16:06.724 ************************************ 00:16:06.724 08:08:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:06.724 * Looking for test storage... 
00:16:06.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.724 08:08:37 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.724 08:08:37 -- nvmf/common.sh@7 -- # uname -s 00:16:06.724 08:08:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.724 08:08:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.724 08:08:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.724 08:08:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.724 08:08:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.724 08:08:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.724 08:08:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.724 08:08:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.724 08:08:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.724 08:08:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.725 08:08:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.985 08:08:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:06.985 08:08:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.985 08:08:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.985 08:08:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.985 08:08:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.985 08:08:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.985 08:08:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.985 08:08:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.985 08:08:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.985 08:08:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.985 08:08:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.985 08:08:37 -- paths/export.sh@5 -- # export PATH 00:16:06.985 08:08:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.985 08:08:37 -- nvmf/common.sh@46 -- # : 0 00:16:06.985 08:08:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:06.985 08:08:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:06.985 08:08:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:06.985 08:08:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.985 08:08:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.985 08:08:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:06.985 08:08:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:06.985 08:08:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:06.985 08:08:37 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:06.985 08:08:37 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.985 08:08:37 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:06.985 08:08:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:06.985 08:08:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.985 08:08:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:06.985 08:08:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:06.985 08:08:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:06.985 08:08:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.985 08:08:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.985 08:08:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.985 08:08:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:06.985 08:08:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:06.985 08:08:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:06.985 08:08:37 -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 08:08:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:15.127 08:08:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:15.127 08:08:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:15.127 08:08:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:15.127 08:08:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:15.127 08:08:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:15.127 08:08:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:15.127 08:08:44 -- nvmf/common.sh@294 -- # net_devs=() 00:16:15.127 08:08:44 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:16:15.127 08:08:44 -- nvmf/common.sh@295 -- # e810=() 00:16:15.127 08:08:44 -- nvmf/common.sh@295 -- # local -ga e810 00:16:15.127 08:08:44 -- nvmf/common.sh@296 -- # x722=() 00:16:15.127 08:08:44 -- nvmf/common.sh@296 -- # local -ga x722 00:16:15.127 08:08:44 -- nvmf/common.sh@297 -- # mlx=() 00:16:15.127 08:08:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:15.127 08:08:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.127 08:08:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:15.127 08:08:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:15.127 08:08:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:15.127 08:08:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:15.127 08:08:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:15.127 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:15.127 08:08:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:15.127 08:08:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:15.127 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:15.127 08:08:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:15.127 08:08:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:15.127 08:08:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.127 08:08:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:15.127 08:08:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.127 08:08:44 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:15.127 Found net devices under 0000:31:00.0: cvl_0_0 00:16:15.127 08:08:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.127 08:08:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:15.127 08:08:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.127 08:08:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:15.127 08:08:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.127 08:08:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:15.127 Found net devices under 0000:31:00.1: cvl_0_1 00:16:15.127 08:08:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.127 08:08:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:15.127 08:08:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:15.127 08:08:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:15.127 08:08:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.127 08:08:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.127 08:08:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.127 08:08:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:15.127 08:08:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.127 08:08:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.127 08:08:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:15.127 08:08:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.127 08:08:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.127 08:08:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:15.127 08:08:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:15.127 08:08:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.127 08:08:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.127 08:08:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.127 08:08:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.127 08:08:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:15.127 08:08:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.127 08:08:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.127 08:08:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.127 08:08:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:15.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:16:15.127 00:16:15.127 --- 10.0.0.2 ping statistics --- 00:16:15.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.127 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:16:15.127 08:08:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:16:15.127 00:16:15.127 --- 10.0.0.1 ping statistics --- 00:16:15.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.127 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:16:15.127 08:08:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.127 08:08:44 -- nvmf/common.sh@410 -- # return 0 00:16:15.127 08:08:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:15.127 08:08:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.127 08:08:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:15.127 08:08:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.127 08:08:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:15.127 08:08:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:15.127 08:08:44 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:15.127 08:08:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:15.127 08:08:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:15.127 08:08:44 -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 08:08:44 -- nvmf/common.sh@469 -- # nvmfpid=1012211 00:16:15.127 08:08:44 -- nvmf/common.sh@470 -- # waitforlisten 1012211 00:16:15.127 08:08:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.127 08:08:44 -- common/autotest_common.sh@819 -- # '[' -z 1012211 ']' 00:16:15.127 08:08:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.127 08:08:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:15.127 08:08:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.127 08:08:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:15.127 08:08:44 -- common/autotest_common.sh@10 -- # set +x 00:16:15.127 [2024-06-11 08:08:44.662097] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:15.127 [2024-06-11 08:08:44.662145] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.127 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.127 [2024-06-11 08:08:44.727698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.127 [2024-06-11 08:08:44.790886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:15.127 [2024-06-11 08:08:44.791006] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.128 [2024-06-11 08:08:44.791014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.128 [2024-06-11 08:08:44.791022] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:15.128 [2024-06-11 08:08:44.791041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.128 08:08:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:15.128 08:08:45 -- common/autotest_common.sh@852 -- # return 0 00:16:15.128 08:08:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.128 08:08:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:15.128 08:08:45 -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 08:08:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:15.128 [2024-06-11 08:08:45.605929] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:15.128 08:08:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:15.128 08:08:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:15.128 08:08:45 -- common/autotest_common.sh@10 -- # set +x 00:16:15.128 ************************************ 00:16:15.128 START TEST lvs_grow_clean 00:16:15.128 ************************************ 00:16:15.128 08:08:45 -- common/autotest_common.sh@1104 -- # lvs_grow 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:15.128 08:08:45 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:15.388 08:08:45 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:15.388 08:08:45 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:15.388 08:08:45 -- target/nvmf_lvs_grow.sh@28 -- # lvs=19396995-c975-4ec0-9645-14da8363655a 00:16:15.388 08:08:45 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:15.388 08:08:45 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:15.649 08:08:46 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:15.649 08:08:46 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:15.649 08:08:46 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 19396995-c975-4ec0-9645-14da8363655a lvol 150 00:16:15.649 08:08:46 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9d7fc84f-ce98-424e-9928-41cec6ed75b0 00:16:15.649 08:08:46 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:15.649 08:08:46 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:15.909 [2024-06-11 08:08:46.411485] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:15.909 [2024-06-11 08:08:46.411536] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:15.909 true 00:16:15.909 08:08:46 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:15.909 08:08:46 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:16.169 08:08:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:16.169 08:08:46 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:16.169 08:08:46 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9d7fc84f-ce98-424e-9928-41cec6ed75b0 00:16:16.429 08:08:46 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:16.429 [2024-06-11 08:08:46.977261] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.429 08:08:46 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:16.690 08:08:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1012614 00:16:16.690 08:08:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.690 08:08:47 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:16.690 08:08:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1012614 /var/tmp/bdevperf.sock 00:16:16.690 08:08:47 -- common/autotest_common.sh@819 -- # '[' -z 1012614 ']' 00:16:16.690 08:08:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.690 08:08:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.690 08:08:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:16.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.690 08:08:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.690 08:08:47 -- common/autotest_common.sh@10 -- # set +x 00:16:16.690 [2024-06-11 08:08:47.184079] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
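The lvs_grow flow this test exercises (the grow itself and the cluster re-check appear further down in the log) amounts to growing a file-backed AIO bdev underneath a live lvstore; roughly, with the long workspace path shortened to aio_file and <lvs-uuid> standing for the UUID printed above (19396995-...):

    # lvstore on a file-backed AIO bdev: 200 MiB file, 4 MiB clusters -> 49 data clusters
    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
    # grow the backing file and let the AIO bdev pick up the new size;
    # the lvstore still reports 49 data clusters until it is explicitly grown
    truncate -s 400M aio_file
    rpc.py bdev_aio_rescan aio_bdev
    # later, while bdevperf writes to the exported lvol over NVMe/TCP:
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>         # total_data_clusters goes from 49 to 99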
00:16:16.690 [2024-06-11 08:08:47.184127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012614 ] 00:16:16.690 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.690 [2024-06-11 08:08:47.261372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.690 [2024-06-11 08:08:47.323389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.631 08:08:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.631 08:08:47 -- common/autotest_common.sh@852 -- # return 0 00:16:17.631 08:08:47 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:17.891 Nvme0n1 00:16:17.891 08:08:48 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:17.891 [ 00:16:17.891 { 00:16:17.891 "name": "Nvme0n1", 00:16:17.891 "aliases": [ 00:16:17.891 "9d7fc84f-ce98-424e-9928-41cec6ed75b0" 00:16:17.891 ], 00:16:17.891 "product_name": "NVMe disk", 00:16:17.891 "block_size": 4096, 00:16:17.891 "num_blocks": 38912, 00:16:17.891 "uuid": "9d7fc84f-ce98-424e-9928-41cec6ed75b0", 00:16:17.891 "assigned_rate_limits": { 00:16:17.891 "rw_ios_per_sec": 0, 00:16:17.891 "rw_mbytes_per_sec": 0, 00:16:17.891 "r_mbytes_per_sec": 0, 00:16:17.891 "w_mbytes_per_sec": 0 00:16:17.891 }, 00:16:17.891 "claimed": false, 00:16:17.891 "zoned": false, 00:16:17.891 "supported_io_types": { 00:16:17.891 "read": true, 00:16:17.891 "write": true, 00:16:17.891 "unmap": true, 00:16:17.891 "write_zeroes": true, 00:16:17.891 "flush": true, 00:16:17.891 "reset": true, 00:16:17.891 "compare": true, 00:16:17.891 "compare_and_write": true, 00:16:17.891 "abort": true, 00:16:17.891 "nvme_admin": true, 00:16:17.891 "nvme_io": true 00:16:17.891 }, 00:16:17.891 "driver_specific": { 00:16:17.891 "nvme": [ 00:16:17.891 { 00:16:17.891 "trid": { 00:16:17.891 "trtype": "TCP", 00:16:17.891 "adrfam": "IPv4", 00:16:17.891 "traddr": "10.0.0.2", 00:16:17.891 "trsvcid": "4420", 00:16:17.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:17.891 }, 00:16:17.891 "ctrlr_data": { 00:16:17.891 "cntlid": 1, 00:16:17.891 "vendor_id": "0x8086", 00:16:17.891 "model_number": "SPDK bdev Controller", 00:16:17.891 "serial_number": "SPDK0", 00:16:17.891 "firmware_revision": "24.01.1", 00:16:17.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:17.891 "oacs": { 00:16:17.891 "security": 0, 00:16:17.891 "format": 0, 00:16:17.891 "firmware": 0, 00:16:17.891 "ns_manage": 0 00:16:17.891 }, 00:16:17.892 "multi_ctrlr": true, 00:16:17.892 "ana_reporting": false 00:16:17.892 }, 00:16:17.892 "vs": { 00:16:17.892 "nvme_version": "1.3" 00:16:17.892 }, 00:16:17.892 "ns_data": { 00:16:17.892 "id": 1, 00:16:17.892 "can_share": true 00:16:17.892 } 00:16:17.892 } 00:16:17.892 ], 00:16:17.892 "mp_policy": "active_passive" 00:16:17.892 } 00:16:17.892 } 00:16:17.892 ] 00:16:17.892 08:08:48 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1012959 00:16:17.892 08:08:48 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:17.892 08:08:48 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:18.152 Running I/O 
for 10 seconds... 00:16:19.093 Latency(us) 00:16:19.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:19.093 Nvme0n1 : 1.00 18475.00 72.17 0.00 0.00 0.00 0.00 0.00 00:16:19.093 =================================================================================================================== 00:16:19.093 Total : 18475.00 72.17 0.00 0.00 0.00 0.00 0.00 00:16:19.093 00:16:20.034 08:08:50 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 19396995-c975-4ec0-9645-14da8363655a 00:16:20.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:20.034 Nvme0n1 : 2.00 18520.50 72.35 0.00 0.00 0.00 0.00 0.00 00:16:20.034 =================================================================================================================== 00:16:20.034 Total : 18520.50 72.35 0.00 0.00 0.00 0.00 0.00 00:16:20.034 00:16:20.034 true 00:16:20.034 08:08:50 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:20.034 08:08:50 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:20.294 08:08:50 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:20.294 08:08:50 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:20.294 08:08:50 -- target/nvmf_lvs_grow.sh@65 -- # wait 1012959 00:16:21.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:21.235 Nvme0n1 : 3.00 18600.33 72.66 0.00 0.00 0.00 0.00 0.00 00:16:21.235 =================================================================================================================== 00:16:21.235 Total : 18600.33 72.66 0.00 0.00 0.00 0.00 0.00 00:16:21.235 00:16:22.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.176 Nvme0n1 : 4.00 18649.50 72.85 0.00 0.00 0.00 0.00 0.00 00:16:22.176 =================================================================================================================== 00:16:22.176 Total : 18649.50 72.85 0.00 0.00 0.00 0.00 0.00 00:16:22.176 00:16:23.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:23.148 Nvme0n1 : 5.00 18680.60 72.97 0.00 0.00 0.00 0.00 0.00 00:16:23.148 =================================================================================================================== 00:16:23.148 Total : 18680.60 72.97 0.00 0.00 0.00 0.00 0.00 00:16:23.148 00:16:24.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:24.089 Nvme0n1 : 6.00 18702.17 73.06 0.00 0.00 0.00 0.00 0.00 00:16:24.089 =================================================================================================================== 00:16:24.089 Total : 18702.17 73.06 0.00 0.00 0.00 0.00 0.00 00:16:24.089 00:16:25.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.030 Nvme0n1 : 7.00 18725.00 73.14 0.00 0.00 0.00 0.00 0.00 00:16:25.030 =================================================================================================================== 00:16:25.030 Total : 18725.00 73.14 0.00 0.00 0.00 0.00 0.00 00:16:25.030 00:16:25.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:25.974 Nvme0n1 : 8.00 18734.50 73.18 0.00 0.00 0.00 0.00 0.00 00:16:25.974 
=================================================================================================================== 00:16:25.974 Total : 18734.50 73.18 0.00 0.00 0.00 0.00 0.00 00:16:25.974 00:16:27.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:27.359 Nvme0n1 : 9.00 18750.00 73.24 0.00 0.00 0.00 0.00 0.00 00:16:27.359 =================================================================================================================== 00:16:27.359 Total : 18750.00 73.24 0.00 0.00 0.00 0.00 0.00 00:16:27.359 00:16:28.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.302 Nvme0n1 : 10.00 18762.80 73.29 0.00 0.00 0.00 0.00 0.00 00:16:28.302 =================================================================================================================== 00:16:28.302 Total : 18762.80 73.29 0.00 0.00 0.00 0.00 0.00 00:16:28.302 00:16:28.302 00:16:28.302 Latency(us) 00:16:28.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:28.302 Nvme0n1 : 10.00 18759.97 73.28 0.00 0.00 6818.95 2512.21 15619.41 00:16:28.302 =================================================================================================================== 00:16:28.302 Total : 18759.97 73.28 0.00 0.00 6818.95 2512.21 15619.41 00:16:28.302 0 00:16:28.302 08:08:58 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1012614 00:16:28.302 08:08:58 -- common/autotest_common.sh@926 -- # '[' -z 1012614 ']' 00:16:28.302 08:08:58 -- common/autotest_common.sh@930 -- # kill -0 1012614 00:16:28.302 08:08:58 -- common/autotest_common.sh@931 -- # uname 00:16:28.302 08:08:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.302 08:08:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1012614 00:16:28.302 08:08:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:28.302 08:08:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:28.302 08:08:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1012614' 00:16:28.302 killing process with pid 1012614 00:16:28.302 08:08:58 -- common/autotest_common.sh@945 -- # kill 1012614 00:16:28.302 Received shutdown signal, test time was about 10.000000 seconds 00:16:28.302 00:16:28.302 Latency(us) 00:16:28.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:28.302 =================================================================================================================== 00:16:28.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:28.302 08:08:58 -- common/autotest_common.sh@950 -- # wait 1012614 00:16:28.302 08:08:58 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:28.563 08:08:58 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:28.563 08:08:58 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:28.563 08:08:59 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:28.563 08:08:59 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:28.563 08:08:59 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:28.824 [2024-06-11 08:08:59.251982] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:28.824 08:08:59 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:28.824 08:08:59 -- common/autotest_common.sh@640 -- # local es=0 00:16:28.824 08:08:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:28.824 08:08:59 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.824 08:08:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:28.824 08:08:59 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.824 08:08:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:28.824 08:08:59 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.824 08:08:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:28.824 08:08:59 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.824 08:08:59 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:28.824 08:08:59 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:28.824 request: 00:16:28.824 { 00:16:28.824 "uuid": "19396995-c975-4ec0-9645-14da8363655a", 00:16:28.824 "method": "bdev_lvol_get_lvstores", 00:16:28.824 "req_id": 1 00:16:28.824 } 00:16:28.824 Got JSON-RPC error response 00:16:28.824 response: 00:16:28.824 { 00:16:28.824 "code": -19, 00:16:28.824 "message": "No such device" 00:16:28.824 } 00:16:28.824 08:08:59 -- common/autotest_common.sh@643 -- # es=1 00:16:28.824 08:08:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:28.824 08:08:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:28.824 08:08:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:28.824 08:08:59 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:29.086 aio_bdev 00:16:29.086 08:08:59 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9d7fc84f-ce98-424e-9928-41cec6ed75b0 00:16:29.086 08:08:59 -- common/autotest_common.sh@887 -- # local bdev_name=9d7fc84f-ce98-424e-9928-41cec6ed75b0 00:16:29.086 08:08:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:29.086 08:08:59 -- common/autotest_common.sh@889 -- # local i 00:16:29.086 08:08:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:29.086 08:08:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:29.086 08:08:59 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:29.086 08:08:59 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d7fc84f-ce98-424e-9928-41cec6ed75b0 -t 2000 00:16:29.347 [ 00:16:29.347 { 00:16:29.347 "name": "9d7fc84f-ce98-424e-9928-41cec6ed75b0", 00:16:29.347 "aliases": [ 00:16:29.347 "lvs/lvol" 
00:16:29.347 ], 00:16:29.347 "product_name": "Logical Volume", 00:16:29.347 "block_size": 4096, 00:16:29.347 "num_blocks": 38912, 00:16:29.347 "uuid": "9d7fc84f-ce98-424e-9928-41cec6ed75b0", 00:16:29.347 "assigned_rate_limits": { 00:16:29.347 "rw_ios_per_sec": 0, 00:16:29.347 "rw_mbytes_per_sec": 0, 00:16:29.347 "r_mbytes_per_sec": 0, 00:16:29.347 "w_mbytes_per_sec": 0 00:16:29.347 }, 00:16:29.347 "claimed": false, 00:16:29.347 "zoned": false, 00:16:29.347 "supported_io_types": { 00:16:29.347 "read": true, 00:16:29.347 "write": true, 00:16:29.347 "unmap": true, 00:16:29.347 "write_zeroes": true, 00:16:29.347 "flush": false, 00:16:29.347 "reset": true, 00:16:29.347 "compare": false, 00:16:29.347 "compare_and_write": false, 00:16:29.347 "abort": false, 00:16:29.347 "nvme_admin": false, 00:16:29.347 "nvme_io": false 00:16:29.347 }, 00:16:29.347 "driver_specific": { 00:16:29.347 "lvol": { 00:16:29.347 "lvol_store_uuid": "19396995-c975-4ec0-9645-14da8363655a", 00:16:29.347 "base_bdev": "aio_bdev", 00:16:29.347 "thin_provision": false, 00:16:29.347 "snapshot": false, 00:16:29.347 "clone": false, 00:16:29.347 "esnap_clone": false 00:16:29.347 } 00:16:29.347 } 00:16:29.347 } 00:16:29.347 ] 00:16:29.347 08:08:59 -- common/autotest_common.sh@895 -- # return 0 00:16:29.347 08:08:59 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:29.347 08:08:59 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:29.608 08:08:59 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:29.608 08:09:00 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19396995-c975-4ec0-9645-14da8363655a 00:16:29.608 08:09:00 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:29.608 08:09:00 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:29.608 08:09:00 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9d7fc84f-ce98-424e-9928-41cec6ed75b0 00:16:29.869 08:09:00 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19396995-c975-4ec0-9645-14da8363655a 00:16:29.869 08:09:00 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:30.128 08:09:00 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:30.128 00:16:30.128 real 0m15.044s 00:16:30.129 user 0m14.459s 00:16:30.129 sys 0m1.375s 00:16:30.129 08:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.129 08:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 ************************************ 00:16:30.129 END TEST lvs_grow_clean 00:16:30.129 ************************************ 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:30.129 08:09:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:30.129 08:09:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.129 08:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 ************************************ 00:16:30.129 START TEST lvs_grow_dirty 00:16:30.129 ************************************ 00:16:30.129 08:09:00 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:16:30.129 
08:09:00 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:30.129 08:09:00 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:30.389 08:09:00 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:30.389 08:09:00 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:30.650 08:09:01 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:30.650 08:09:01 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:30.650 08:09:01 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:30.650 08:09:01 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:30.650 08:09:01 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:30.650 08:09:01 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3839066-2c28-476a-ad3d-0f644aa6c098 lvol 150 00:16:30.910 08:09:01 -- target/nvmf_lvs_grow.sh@33 -- # lvol=847b5668-6992-48f7-88ce-e253607c43dd 00:16:30.910 08:09:01 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:30.910 08:09:01 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:30.910 [2024-06-11 08:09:01.515073] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:30.910 [2024-06-11 08:09:01.515125] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:30.910 true 00:16:30.910 08:09:01 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:30.910 08:09:01 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:31.170 08:09:01 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:31.170 08:09:01 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:31.430 08:09:01 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
847b5668-6992-48f7-88ce-e253607c43dd 00:16:31.430 08:09:01 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:31.690 08:09:02 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.690 08:09:02 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1015828 00:16:31.690 08:09:02 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.690 08:09:02 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:31.690 08:09:02 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1015828 /var/tmp/bdevperf.sock 00:16:31.690 08:09:02 -- common/autotest_common.sh@819 -- # '[' -z 1015828 ']' 00:16:31.690 08:09:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.690 08:09:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.690 08:09:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.690 08:09:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.690 08:09:02 -- common/autotest_common.sh@10 -- # set +x 00:16:31.690 [2024-06-11 08:09:02.317121] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:31.690 [2024-06-11 08:09:02.317171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015828 ] 00:16:31.949 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.949 [2024-06-11 08:09:02.392667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.949 [2024-06-11 08:09:02.444722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.517 08:09:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.517 08:09:03 -- common/autotest_common.sh@852 -- # return 0 00:16:32.517 08:09:03 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:32.777 Nvme0n1 00:16:32.777 08:09:03 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:33.037 [ 00:16:33.037 { 00:16:33.037 "name": "Nvme0n1", 00:16:33.037 "aliases": [ 00:16:33.037 "847b5668-6992-48f7-88ce-e253607c43dd" 00:16:33.037 ], 00:16:33.037 "product_name": "NVMe disk", 00:16:33.037 "block_size": 4096, 00:16:33.037 "num_blocks": 38912, 00:16:33.037 "uuid": "847b5668-6992-48f7-88ce-e253607c43dd", 00:16:33.037 "assigned_rate_limits": { 00:16:33.037 "rw_ios_per_sec": 0, 00:16:33.037 "rw_mbytes_per_sec": 0, 00:16:33.037 "r_mbytes_per_sec": 0, 00:16:33.037 "w_mbytes_per_sec": 0 00:16:33.037 }, 00:16:33.037 "claimed": false, 00:16:33.037 "zoned": false, 00:16:33.037 "supported_io_types": { 00:16:33.037 "read": true, 00:16:33.037 "write": true, 
00:16:33.037 "unmap": true, 00:16:33.037 "write_zeroes": true, 00:16:33.037 "flush": true, 00:16:33.037 "reset": true, 00:16:33.037 "compare": true, 00:16:33.037 "compare_and_write": true, 00:16:33.037 "abort": true, 00:16:33.037 "nvme_admin": true, 00:16:33.037 "nvme_io": true 00:16:33.037 }, 00:16:33.037 "driver_specific": { 00:16:33.037 "nvme": [ 00:16:33.037 { 00:16:33.037 "trid": { 00:16:33.037 "trtype": "TCP", 00:16:33.037 "adrfam": "IPv4", 00:16:33.037 "traddr": "10.0.0.2", 00:16:33.037 "trsvcid": "4420", 00:16:33.037 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:33.037 }, 00:16:33.037 "ctrlr_data": { 00:16:33.037 "cntlid": 1, 00:16:33.037 "vendor_id": "0x8086", 00:16:33.037 "model_number": "SPDK bdev Controller", 00:16:33.037 "serial_number": "SPDK0", 00:16:33.037 "firmware_revision": "24.01.1", 00:16:33.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.037 "oacs": { 00:16:33.037 "security": 0, 00:16:33.037 "format": 0, 00:16:33.037 "firmware": 0, 00:16:33.037 "ns_manage": 0 00:16:33.037 }, 00:16:33.037 "multi_ctrlr": true, 00:16:33.037 "ana_reporting": false 00:16:33.037 }, 00:16:33.037 "vs": { 00:16:33.037 "nvme_version": "1.3" 00:16:33.037 }, 00:16:33.037 "ns_data": { 00:16:33.037 "id": 1, 00:16:33.037 "can_share": true 00:16:33.037 } 00:16:33.037 } 00:16:33.037 ], 00:16:33.037 "mp_policy": "active_passive" 00:16:33.037 } 00:16:33.037 } 00:16:33.037 ] 00:16:33.037 08:09:03 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1016149 00:16:33.037 08:09:03 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:33.037 08:09:03 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:33.037 Running I/O for 10 seconds... 00:16:33.977 Latency(us) 00:16:33.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.977 Nvme0n1 : 1.00 18430.00 71.99 0.00 0.00 0.00 0.00 0.00 00:16:33.977 =================================================================================================================== 00:16:33.977 Total : 18430.00 71.99 0.00 0.00 0.00 0.00 0.00 00:16:33.977 00:16:34.918 08:09:05 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:34.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.918 Nvme0n1 : 2.00 18556.00 72.48 0.00 0.00 0.00 0.00 0.00 00:16:34.918 =================================================================================================================== 00:16:34.918 Total : 18556.00 72.48 0.00 0.00 0.00 0.00 0.00 00:16:34.918 00:16:35.178 true 00:16:35.178 08:09:05 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:35.178 08:09:05 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:35.178 08:09:05 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:35.178 08:09:05 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:35.178 08:09:05 -- target/nvmf_lvs_grow.sh@65 -- # wait 1016149 00:16:36.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.129 Nvme0n1 : 3.00 18618.33 72.73 0.00 0.00 0.00 0.00 0.00 00:16:36.129 
=================================================================================================================== 00:16:36.129 Total : 18618.33 72.73 0.00 0.00 0.00 0.00 0.00 00:16:36.129 00:16:37.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.152 Nvme0n1 : 4.00 18666.75 72.92 0.00 0.00 0.00 0.00 0.00 00:16:37.152 =================================================================================================================== 00:16:37.152 Total : 18666.75 72.92 0.00 0.00 0.00 0.00 0.00 00:16:37.152 00:16:38.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.152 Nvme0n1 : 5.00 18695.80 73.03 0.00 0.00 0.00 0.00 0.00 00:16:38.152 =================================================================================================================== 00:16:38.152 Total : 18695.80 73.03 0.00 0.00 0.00 0.00 0.00 00:16:38.152 00:16:39.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.093 Nvme0n1 : 6.00 18714.83 73.10 0.00 0.00 0.00 0.00 0.00 00:16:39.093 =================================================================================================================== 00:16:39.093 Total : 18714.83 73.10 0.00 0.00 0.00 0.00 0.00 00:16:39.093 00:16:40.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.035 Nvme0n1 : 7.00 18718.57 73.12 0.00 0.00 0.00 0.00 0.00 00:16:40.035 =================================================================================================================== 00:16:40.035 Total : 18718.57 73.12 0.00 0.00 0.00 0.00 0.00 00:16:40.035 00:16:40.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.977 Nvme0n1 : 8.00 18730.62 73.17 0.00 0.00 0.00 0.00 0.00 00:16:40.977 =================================================================================================================== 00:16:40.977 Total : 18730.62 73.17 0.00 0.00 0.00 0.00 0.00 00:16:40.977 00:16:41.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.940 Nvme0n1 : 9.00 18739.22 73.20 0.00 0.00 0.00 0.00 0.00 00:16:41.940 =================================================================================================================== 00:16:41.940 Total : 18739.22 73.20 0.00 0.00 0.00 0.00 0.00 00:16:41.940 00:16:43.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.325 Nvme0n1 : 10.00 18759.10 73.28 0.00 0.00 0.00 0.00 0.00 00:16:43.325 =================================================================================================================== 00:16:43.325 Total : 18759.10 73.28 0.00 0.00 0.00 0.00 0.00 00:16:43.325 00:16:43.325 00:16:43.325 Latency(us) 00:16:43.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.325 Nvme0n1 : 10.01 18759.38 73.28 0.00 0.00 6819.60 4096.00 17694.72 00:16:43.325 =================================================================================================================== 00:16:43.325 Total : 18759.38 73.28 0.00 0.00 6819.60 4096.00 17694.72 00:16:43.325 0 00:16:43.325 08:09:13 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1015828 00:16:43.325 08:09:13 -- common/autotest_common.sh@926 -- # '[' -z 1015828 ']' 00:16:43.325 08:09:13 -- common/autotest_common.sh@930 -- # kill -0 1015828 00:16:43.325 08:09:13 -- common/autotest_common.sh@931 -- # uname 00:16:43.325 08:09:13 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:43.325 08:09:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1015828 00:16:43.325 08:09:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:43.325 08:09:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:43.325 08:09:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1015828' 00:16:43.325 killing process with pid 1015828 00:16:43.325 08:09:13 -- common/autotest_common.sh@945 -- # kill 1015828 00:16:43.325 Received shutdown signal, test time was about 10.000000 seconds 00:16:43.325 00:16:43.325 Latency(us) 00:16:43.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.325 =================================================================================================================== 00:16:43.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:43.325 08:09:13 -- common/autotest_common.sh@950 -- # wait 1015828 00:16:43.325 08:09:13 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:43.325 08:09:13 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:43.325 08:09:13 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:43.586 08:09:14 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:43.586 08:09:14 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:43.586 08:09:14 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1012211 00:16:43.586 08:09:14 -- target/nvmf_lvs_grow.sh@74 -- # wait 1012211 00:16:43.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1012211 Killed "${NVMF_APP[@]}" "$@" 00:16:43.586 08:09:14 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:43.587 08:09:14 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:43.587 08:09:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.587 08:09:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:43.587 08:09:14 -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 08:09:14 -- nvmf/common.sh@469 -- # nvmfpid=1018668 00:16:43.587 08:09:14 -- nvmf/common.sh@470 -- # waitforlisten 1018668 00:16:43.587 08:09:14 -- common/autotest_common.sh@819 -- # '[' -z 1018668 ']' 00:16:43.587 08:09:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:43.587 08:09:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.587 08:09:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:43.587 08:09:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.587 08:09:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:43.587 08:09:14 -- common/autotest_common.sh@10 -- # set +x 00:16:43.587 [2024-06-11 08:09:14.170809] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:43.587 [2024-06-11 08:09:14.170861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.587 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.848 [2024-06-11 08:09:14.237698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.848 [2024-06-11 08:09:14.301671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:43.848 [2024-06-11 08:09:14.301786] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.848 [2024-06-11 08:09:14.301793] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.848 [2024-06-11 08:09:14.301800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.848 [2024-06-11 08:09:14.301818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.419 08:09:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:44.419 08:09:14 -- common/autotest_common.sh@852 -- # return 0 00:16:44.419 08:09:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:44.419 08:09:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:44.419 08:09:14 -- common/autotest_common.sh@10 -- # set +x 00:16:44.419 08:09:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.419 08:09:14 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:44.679 [2024-06-11 08:09:15.090455] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:44.679 [2024-06-11 08:09:15.090548] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:44.679 [2024-06-11 08:09:15.090578] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:44.679 08:09:15 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:44.679 08:09:15 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 847b5668-6992-48f7-88ce-e253607c43dd 00:16:44.679 08:09:15 -- common/autotest_common.sh@887 -- # local bdev_name=847b5668-6992-48f7-88ce-e253607c43dd 00:16:44.679 08:09:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:44.679 08:09:15 -- common/autotest_common.sh@889 -- # local i 00:16:44.679 08:09:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:44.679 08:09:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:44.679 08:09:15 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:44.679 08:09:15 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 847b5668-6992-48f7-88ce-e253607c43dd -t 2000 00:16:44.940 [ 00:16:44.940 { 00:16:44.940 "name": "847b5668-6992-48f7-88ce-e253607c43dd", 00:16:44.940 "aliases": [ 00:16:44.940 "lvs/lvol" 00:16:44.940 ], 00:16:44.940 "product_name": "Logical Volume", 00:16:44.940 "block_size": 4096, 00:16:44.940 "num_blocks": 38912, 00:16:44.940 "uuid": "847b5668-6992-48f7-88ce-e253607c43dd", 00:16:44.940 "assigned_rate_limits": { 00:16:44.940 "rw_ios_per_sec": 0, 00:16:44.940 "rw_mbytes_per_sec": 0, 00:16:44.940 "r_mbytes_per_sec": 0, 00:16:44.940 
"w_mbytes_per_sec": 0 00:16:44.940 }, 00:16:44.940 "claimed": false, 00:16:44.940 "zoned": false, 00:16:44.940 "supported_io_types": { 00:16:44.940 "read": true, 00:16:44.940 "write": true, 00:16:44.940 "unmap": true, 00:16:44.940 "write_zeroes": true, 00:16:44.940 "flush": false, 00:16:44.940 "reset": true, 00:16:44.940 "compare": false, 00:16:44.940 "compare_and_write": false, 00:16:44.940 "abort": false, 00:16:44.940 "nvme_admin": false, 00:16:44.940 "nvme_io": false 00:16:44.940 }, 00:16:44.940 "driver_specific": { 00:16:44.940 "lvol": { 00:16:44.940 "lvol_store_uuid": "c3839066-2c28-476a-ad3d-0f644aa6c098", 00:16:44.940 "base_bdev": "aio_bdev", 00:16:44.940 "thin_provision": false, 00:16:44.940 "snapshot": false, 00:16:44.940 "clone": false, 00:16:44.940 "esnap_clone": false 00:16:44.940 } 00:16:44.940 } 00:16:44.940 } 00:16:44.940 ] 00:16:44.940 08:09:15 -- common/autotest_common.sh@895 -- # return 0 00:16:44.940 08:09:15 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:44.940 08:09:15 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:44.940 08:09:15 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:44.940 08:09:15 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:44.940 08:09:15 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:45.201 08:09:15 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:45.201 08:09:15 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:45.201 [2024-06-11 08:09:15.794242] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:45.201 08:09:15 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:45.201 08:09:15 -- common/autotest_common.sh@640 -- # local es=0 00:16:45.201 08:09:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:45.201 08:09:15 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.201 08:09:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:45.201 08:09:15 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.201 08:09:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:45.201 08:09:15 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.201 08:09:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:45.201 08:09:15 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.201 08:09:15 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:45.201 08:09:15 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:45.462 request: 00:16:45.462 { 00:16:45.462 
"uuid": "c3839066-2c28-476a-ad3d-0f644aa6c098", 00:16:45.462 "method": "bdev_lvol_get_lvstores", 00:16:45.462 "req_id": 1 00:16:45.462 } 00:16:45.462 Got JSON-RPC error response 00:16:45.462 response: 00:16:45.462 { 00:16:45.462 "code": -19, 00:16:45.462 "message": "No such device" 00:16:45.462 } 00:16:45.462 08:09:15 -- common/autotest_common.sh@643 -- # es=1 00:16:45.462 08:09:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:45.462 08:09:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:45.462 08:09:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:45.462 08:09:15 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:45.722 aio_bdev 00:16:45.722 08:09:16 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 847b5668-6992-48f7-88ce-e253607c43dd 00:16:45.722 08:09:16 -- common/autotest_common.sh@887 -- # local bdev_name=847b5668-6992-48f7-88ce-e253607c43dd 00:16:45.722 08:09:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:45.722 08:09:16 -- common/autotest_common.sh@889 -- # local i 00:16:45.722 08:09:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:45.722 08:09:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:45.722 08:09:16 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:45.722 08:09:16 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 847b5668-6992-48f7-88ce-e253607c43dd -t 2000 00:16:45.983 [ 00:16:45.983 { 00:16:45.983 "name": "847b5668-6992-48f7-88ce-e253607c43dd", 00:16:45.983 "aliases": [ 00:16:45.983 "lvs/lvol" 00:16:45.983 ], 00:16:45.983 "product_name": "Logical Volume", 00:16:45.983 "block_size": 4096, 00:16:45.983 "num_blocks": 38912, 00:16:45.983 "uuid": "847b5668-6992-48f7-88ce-e253607c43dd", 00:16:45.983 "assigned_rate_limits": { 00:16:45.983 "rw_ios_per_sec": 0, 00:16:45.983 "rw_mbytes_per_sec": 0, 00:16:45.983 "r_mbytes_per_sec": 0, 00:16:45.983 "w_mbytes_per_sec": 0 00:16:45.983 }, 00:16:45.983 "claimed": false, 00:16:45.983 "zoned": false, 00:16:45.983 "supported_io_types": { 00:16:45.983 "read": true, 00:16:45.983 "write": true, 00:16:45.983 "unmap": true, 00:16:45.983 "write_zeroes": true, 00:16:45.983 "flush": false, 00:16:45.983 "reset": true, 00:16:45.983 "compare": false, 00:16:45.983 "compare_and_write": false, 00:16:45.983 "abort": false, 00:16:45.983 "nvme_admin": false, 00:16:45.983 "nvme_io": false 00:16:45.983 }, 00:16:45.983 "driver_specific": { 00:16:45.983 "lvol": { 00:16:45.983 "lvol_store_uuid": "c3839066-2c28-476a-ad3d-0f644aa6c098", 00:16:45.983 "base_bdev": "aio_bdev", 00:16:45.983 "thin_provision": false, 00:16:45.983 "snapshot": false, 00:16:45.983 "clone": false, 00:16:45.983 "esnap_clone": false 00:16:45.983 } 00:16:45.983 } 00:16:45.983 } 00:16:45.983 ] 00:16:45.983 08:09:16 -- common/autotest_common.sh@895 -- # return 0 00:16:45.983 08:09:16 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:45.983 08:09:16 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:45.983 08:09:16 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:45.983 08:09:16 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:45.983 08:09:16 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:46.244 08:09:16 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:46.244 08:09:16 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 847b5668-6992-48f7-88ce-e253607c43dd 00:16:46.244 08:09:16 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3839066-2c28-476a-ad3d-0f644aa6c098 00:16:46.505 08:09:17 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:46.766 08:09:17 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:46.766 00:16:46.766 real 0m16.515s 00:16:46.766 user 0m43.506s 00:16:46.766 sys 0m2.721s 00:16:46.766 08:09:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.766 08:09:17 -- common/autotest_common.sh@10 -- # set +x 00:16:46.766 ************************************ 00:16:46.766 END TEST lvs_grow_dirty 00:16:46.766 ************************************ 00:16:46.766 08:09:17 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:46.766 08:09:17 -- common/autotest_common.sh@796 -- # type=--id 00:16:46.766 08:09:17 -- common/autotest_common.sh@797 -- # id=0 00:16:46.766 08:09:17 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:16:46.766 08:09:17 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:46.766 08:09:17 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:16:46.766 08:09:17 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:16:46.766 08:09:17 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:16:46.766 08:09:17 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:46.766 nvmf_trace.0 00:16:46.766 08:09:17 -- common/autotest_common.sh@811 -- # return 0 00:16:46.766 08:09:17 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:46.766 08:09:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:46.766 08:09:17 -- nvmf/common.sh@116 -- # sync 00:16:46.766 08:09:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:46.766 08:09:17 -- nvmf/common.sh@119 -- # set +e 00:16:46.766 08:09:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:46.766 08:09:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:46.766 rmmod nvme_tcp 00:16:46.766 rmmod nvme_fabrics 00:16:46.766 rmmod nvme_keyring 00:16:46.766 08:09:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:46.766 08:09:17 -- nvmf/common.sh@123 -- # set -e 00:16:46.766 08:09:17 -- nvmf/common.sh@124 -- # return 0 00:16:46.766 08:09:17 -- nvmf/common.sh@477 -- # '[' -n 1018668 ']' 00:16:46.766 08:09:17 -- nvmf/common.sh@478 -- # killprocess 1018668 00:16:46.766 08:09:17 -- common/autotest_common.sh@926 -- # '[' -z 1018668 ']' 00:16:46.766 08:09:17 -- common/autotest_common.sh@930 -- # kill -0 1018668 00:16:46.766 08:09:17 -- common/autotest_common.sh@931 -- # uname 00:16:46.766 08:09:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:46.766 08:09:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1018668 00:16:47.026 08:09:17 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:16:47.026 08:09:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:47.026 08:09:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1018668' 00:16:47.026 killing process with pid 1018668 00:16:47.026 08:09:17 -- common/autotest_common.sh@945 -- # kill 1018668 00:16:47.026 08:09:17 -- common/autotest_common.sh@950 -- # wait 1018668 00:16:47.026 08:09:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:47.026 08:09:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:47.026 08:09:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:47.026 08:09:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.026 08:09:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:47.026 08:09:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.026 08:09:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.027 08:09:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.573 08:09:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:49.573 00:16:49.573 real 0m42.388s 00:16:49.573 user 1m3.739s 00:16:49.573 sys 0m9.789s 00:16:49.573 08:09:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.573 08:09:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.573 ************************************ 00:16:49.573 END TEST nvmf_lvs_grow 00:16:49.573 ************************************ 00:16:49.573 08:09:19 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:49.573 08:09:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:49.573 08:09:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:49.573 08:09:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.573 ************************************ 00:16:49.573 START TEST nvmf_bdev_io_wait 00:16:49.573 ************************************ 00:16:49.573 08:09:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:49.573 * Looking for test storage... 
00:16:49.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.573 08:09:19 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.573 08:09:19 -- nvmf/common.sh@7 -- # uname -s 00:16:49.573 08:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.573 08:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.573 08:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.573 08:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.573 08:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.573 08:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.573 08:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.573 08:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.573 08:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.573 08:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.573 08:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.573 08:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.573 08:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.573 08:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.573 08:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.573 08:09:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.573 08:09:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.573 08:09:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.573 08:09:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.573 08:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.573 08:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.573 08:09:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.573 08:09:19 -- paths/export.sh@5 -- # export PATH 00:16:49.573 08:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.573 08:09:19 -- nvmf/common.sh@46 -- # : 0 00:16:49.573 08:09:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.573 08:09:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.573 08:09:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.573 08:09:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.573 08:09:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.573 08:09:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:49.573 08:09:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.573 08:09:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.573 08:09:19 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.573 08:09:19 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.573 08:09:19 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:49.573 08:09:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:49.574 08:09:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.574 08:09:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:49.574 08:09:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:49.574 08:09:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:49.574 08:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.574 08:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.574 08:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.574 08:09:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:49.574 08:09:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:49.574 08:09:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:49.574 08:09:19 -- common/autotest_common.sh@10 -- # set +x 00:16:56.164 08:09:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:56.164 08:09:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:56.164 08:09:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:56.164 08:09:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:56.164 08:09:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:56.164 08:09:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:56.164 08:09:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:56.164 08:09:26 -- nvmf/common.sh@294 -- # net_devs=() 00:16:56.164 08:09:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:56.164 08:09:26 -- 
nvmf/common.sh@295 -- # e810=() 00:16:56.164 08:09:26 -- nvmf/common.sh@295 -- # local -ga e810 00:16:56.164 08:09:26 -- nvmf/common.sh@296 -- # x722=() 00:16:56.164 08:09:26 -- nvmf/common.sh@296 -- # local -ga x722 00:16:56.164 08:09:26 -- nvmf/common.sh@297 -- # mlx=() 00:16:56.164 08:09:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:56.165 08:09:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:56.165 08:09:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:56.165 08:09:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:56.165 08:09:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:56.165 08:09:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:56.165 08:09:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:56.165 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:56.165 08:09:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:56.165 08:09:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:56.165 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:56.165 08:09:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:56.165 08:09:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:56.165 08:09:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.165 08:09:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:56.165 08:09:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.165 08:09:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:16:56.165 Found net devices under 0000:31:00.0: cvl_0_0 00:16:56.165 08:09:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.165 08:09:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:56.165 08:09:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:56.165 08:09:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:56.165 08:09:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:56.165 08:09:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:56.165 Found net devices under 0000:31:00.1: cvl_0_1 00:16:56.165 08:09:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:56.165 08:09:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:56.165 08:09:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:56.165 08:09:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:56.165 08:09:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:56.165 08:09:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.165 08:09:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.165 08:09:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:56.165 08:09:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:56.165 08:09:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:56.165 08:09:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:56.165 08:09:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:56.165 08:09:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:56.165 08:09:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.165 08:09:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:56.165 08:09:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:56.165 08:09:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:56.165 08:09:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:56.425 08:09:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:56.425 08:09:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:56.425 08:09:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:56.425 08:09:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:56.425 08:09:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:56.425 08:09:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:56.425 08:09:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:56.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:16:56.425 00:16:56.425 --- 10.0.0.2 ping statistics --- 00:16:56.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.425 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:16:56.425 08:09:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:56.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:16:56.425 00:16:56.425 --- 10.0.0.1 ping statistics --- 00:16:56.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.425 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:56.425 08:09:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.425 08:09:27 -- nvmf/common.sh@410 -- # return 0 00:16:56.425 08:09:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:56.425 08:09:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.425 08:09:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:56.426 08:09:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:56.426 08:09:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.426 08:09:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:56.426 08:09:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:56.426 08:09:27 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:56.426 08:09:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.426 08:09:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:56.426 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:56.426 08:09:27 -- nvmf/common.sh@469 -- # nvmfpid=1023486 00:16:56.426 08:09:27 -- nvmf/common.sh@470 -- # waitforlisten 1023486 00:16:56.426 08:09:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:56.426 08:09:27 -- common/autotest_common.sh@819 -- # '[' -z 1023486 ']' 00:16:56.426 08:09:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.426 08:09:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.426 08:09:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.426 08:09:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.426 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:56.686 [2024-06-11 08:09:27.090082] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:56.686 [2024-06-11 08:09:27.090135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.686 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.686 [2024-06-11 08:09:27.159024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.686 [2024-06-11 08:09:27.227917] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:56.686 [2024-06-11 08:09:27.228050] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.686 [2024-06-11 08:09:27.228061] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.686 [2024-06-11 08:09:27.228071] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
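The nvmf_tcp_init trace above boils down to moving one port of the E810 pair into a private network namespace and addressing both ends on 10.0.0.0/24, so the target (10.0.0.2) and the initiator (10.0.0.1) talk over real hardware. A condensed sketch of the commands visible in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port goes into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # the two pings whose output appears above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt launch later in this log is prefixed with ip netns exec cvl_0_0_ns_spdk for the same reason.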
00:16:56.686 [2024-06-11 08:09:27.228208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.686 [2024-06-11 08:09:27.228308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.686 [2024-06-11 08:09:27.228462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.686 [2024-06-11 08:09:27.228482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.257 08:09:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.257 08:09:27 -- common/autotest_common.sh@852 -- # return 0 00:16:57.257 08:09:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:57.257 08:09:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:57.257 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.257 08:09:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.257 08:09:27 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:57.257 08:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.257 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.518 08:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.518 08:09:27 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:57.518 08:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.518 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.518 08:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.518 08:09:27 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:57.518 08:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.518 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.518 [2024-06-11 08:09:27.964574] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.518 08:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.518 08:09:27 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:57.518 08:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.518 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.518 Malloc0 00:16:57.518 08:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:57.518 08:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.518 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:16:57.518 08:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:57.518 08:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.518 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:16:57.518 08:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.518 08:09:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:57.518 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:16:57.518 [2024-06-11 08:09:28.032721] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.518 08:09:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1023842 00:16:57.518 
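Stripped of the xtrace noise, the target-side setup traced above is a short RPC sequence; a sketch using scripts/rpc.py directly (rpc_cmd in the test script is a thin wrapper around it). The deliberately tiny bdev_io pool is set before framework_start_init, which is only possible because nvmf_tgt was started with --wait-for-rpc, and it is what pushes bdevperf into the bdev I/O-wait path this test is named after:

    rpc.py bdev_set_options -p 5 -c 1            # shrink the bdev_io pool (5) and per-thread cache (1)
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0  # 64 MiB backing bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420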
08:09:28 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@30 -- # READ_PID=1023844 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # config=() 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # local subsystem config 00:16:57.518 08:09:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:57.518 { 00:16:57.518 "params": { 00:16:57.518 "name": "Nvme$subsystem", 00:16:57.518 "trtype": "$TEST_TRANSPORT", 00:16:57.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.518 "adrfam": "ipv4", 00:16:57.518 "trsvcid": "$NVMF_PORT", 00:16:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.518 "hdgst": ${hdgst:-false}, 00:16:57.518 "ddgst": ${ddgst:-false} 00:16:57.518 }, 00:16:57.518 "method": "bdev_nvme_attach_controller" 00:16:57.518 } 00:16:57.518 EOF 00:16:57.518 )") 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1023846 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # config=() 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # local subsystem config 00:16:57.518 08:09:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1023849 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:57.518 { 00:16:57.518 "params": { 00:16:57.518 "name": "Nvme$subsystem", 00:16:57.518 "trtype": "$TEST_TRANSPORT", 00:16:57.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.518 "adrfam": "ipv4", 00:16:57.518 "trsvcid": "$NVMF_PORT", 00:16:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.518 "hdgst": ${hdgst:-false}, 00:16:57.518 "ddgst": ${ddgst:-false} 00:16:57.518 }, 00:16:57.518 "method": "bdev_nvme_attach_controller" 00:16:57.518 } 00:16:57.518 EOF 00:16:57.518 )") 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@35 -- # sync 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # cat 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # config=() 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # local subsystem config 00:16:57.518 08:09:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:57.518 { 00:16:57.518 "params": { 00:16:57.518 "name": "Nvme$subsystem", 00:16:57.518 "trtype": "$TEST_TRANSPORT", 00:16:57.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.518 "adrfam": "ipv4", 00:16:57.518 "trsvcid": "$NVMF_PORT", 00:16:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.518 "hdgst": ${hdgst:-false}, 00:16:57.518 "ddgst": ${ddgst:-false} 00:16:57.518 }, 
00:16:57.518 "method": "bdev_nvme_attach_controller" 00:16:57.518 } 00:16:57.518 EOF 00:16:57.518 )") 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # config=() 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # cat 00:16:57.518 08:09:28 -- nvmf/common.sh@520 -- # local subsystem config 00:16:57.518 08:09:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:57.518 { 00:16:57.518 "params": { 00:16:57.518 "name": "Nvme$subsystem", 00:16:57.518 "trtype": "$TEST_TRANSPORT", 00:16:57.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:57.518 "adrfam": "ipv4", 00:16:57.518 "trsvcid": "$NVMF_PORT", 00:16:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:57.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:57.518 "hdgst": ${hdgst:-false}, 00:16:57.518 "ddgst": ${ddgst:-false} 00:16:57.518 }, 00:16:57.518 "method": "bdev_nvme_attach_controller" 00:16:57.518 } 00:16:57.518 EOF 00:16:57.518 )") 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # cat 00:16:57.518 08:09:28 -- target/bdev_io_wait.sh@37 -- # wait 1023842 00:16:57.518 08:09:28 -- nvmf/common.sh@542 -- # cat 00:16:57.518 08:09:28 -- nvmf/common.sh@544 -- # jq . 00:16:57.518 08:09:28 -- nvmf/common.sh@544 -- # jq . 00:16:57.518 08:09:28 -- nvmf/common.sh@544 -- # jq . 00:16:57.518 08:09:28 -- nvmf/common.sh@545 -- # IFS=, 00:16:57.518 08:09:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:57.518 "params": { 00:16:57.518 "name": "Nvme1", 00:16:57.518 "trtype": "tcp", 00:16:57.518 "traddr": "10.0.0.2", 00:16:57.518 "adrfam": "ipv4", 00:16:57.518 "trsvcid": "4420", 00:16:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.519 "hdgst": false, 00:16:57.519 "ddgst": false 00:16:57.519 }, 00:16:57.519 "method": "bdev_nvme_attach_controller" 00:16:57.519 }' 00:16:57.519 08:09:28 -- nvmf/common.sh@544 -- # jq . 
00:16:57.519 08:09:28 -- nvmf/common.sh@545 -- # IFS=, 00:16:57.519 08:09:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:57.519 "params": { 00:16:57.519 "name": "Nvme1", 00:16:57.519 "trtype": "tcp", 00:16:57.519 "traddr": "10.0.0.2", 00:16:57.519 "adrfam": "ipv4", 00:16:57.519 "trsvcid": "4420", 00:16:57.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.519 "hdgst": false, 00:16:57.519 "ddgst": false 00:16:57.519 }, 00:16:57.519 "method": "bdev_nvme_attach_controller" 00:16:57.519 }' 00:16:57.519 08:09:28 -- nvmf/common.sh@545 -- # IFS=, 00:16:57.519 08:09:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:57.519 "params": { 00:16:57.519 "name": "Nvme1", 00:16:57.519 "trtype": "tcp", 00:16:57.519 "traddr": "10.0.0.2", 00:16:57.519 "adrfam": "ipv4", 00:16:57.519 "trsvcid": "4420", 00:16:57.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.519 "hdgst": false, 00:16:57.519 "ddgst": false 00:16:57.519 }, 00:16:57.519 "method": "bdev_nvme_attach_controller" 00:16:57.519 }' 00:16:57.519 08:09:28 -- nvmf/common.sh@545 -- # IFS=, 00:16:57.519 08:09:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:57.519 "params": { 00:16:57.519 "name": "Nvme1", 00:16:57.519 "trtype": "tcp", 00:16:57.519 "traddr": "10.0.0.2", 00:16:57.519 "adrfam": "ipv4", 00:16:57.519 "trsvcid": "4420", 00:16:57.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.519 "hdgst": false, 00:16:57.519 "ddgst": false 00:16:57.519 }, 00:16:57.519 "method": "bdev_nvme_attach_controller" 00:16:57.519 }' 00:16:57.519 [2024-06-11 08:09:28.083775] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:57.519 [2024-06-11 08:09:28.083826] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:57.519 [2024-06-11 08:09:28.084925] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:57.519 [2024-06-11 08:09:28.084970] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:57.519 [2024-06-11 08:09:28.085322] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:57.519 [2024-06-11 08:09:28.085366] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:57.519 [2024-06-11 08:09:28.085880] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:57.519 [2024-06-11 08:09:28.085920] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:57.519 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.779 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.779 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.779 [2024-06-11 08:09:28.229395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.779 [2024-06-11 08:09:28.275031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.779 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.779 [2024-06-11 08:09:28.279368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.779 [2024-06-11 08:09:28.322655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.779 [2024-06-11 08:09:28.322742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:57.779 [2024-06-11 08:09:28.370238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:57.779 [2024-06-11 08:09:28.371836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.779 [2024-06-11 08:09:28.419923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:58.038 Running I/O for 1 seconds... 00:16:58.038 Running I/O for 1 seconds... 00:16:58.038 Running I/O for 1 seconds... 00:16:58.298 Running I/O for 1 seconds... 00:16:59.239 00:16:59.239 Latency(us) 00:16:59.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.239 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:59.239 Nvme1n1 : 1.00 19998.41 78.12 0.00 0.00 6384.78 3959.47 15619.41 00:16:59.239 =================================================================================================================== 00:16:59.239 Total : 19998.41 78.12 0.00 0.00 6384.78 3959.47 15619.41 00:16:59.239 00:16:59.239 Latency(us) 00:16:59.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.239 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:59.239 Nvme1n1 : 1.01 12580.06 49.14 0.00 0.00 10143.76 5106.35 18677.76 00:16:59.239 =================================================================================================================== 00:16:59.239 Total : 12580.06 49.14 0.00 0.00 10143.76 5106.35 18677.76 00:16:59.239 00:16:59.239 Latency(us) 00:16:59.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.239 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:59.239 Nvme1n1 : 1.00 187451.45 732.23 0.00 0.00 680.18 267.95 781.65 00:16:59.239 =================================================================================================================== 00:16:59.239 Total : 187451.45 732.23 0.00 0.00 680.18 267.95 781.65 00:16:59.239 00:16:59.239 Latency(us) 00:16:59.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.239 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:59.239 Nvme1n1 : 1.00 13007.51 50.81 0.00 0.00 9813.55 4396.37 21736.11 00:16:59.239 =================================================================================================================== 00:16:59.239 Total : 13007.51 50.81 0.00 0.00 9813.55 4396.37 21736.11 00:16:59.239 08:09:29 -- target/bdev_io_wait.sh@38 -- # wait 1023844 00:16:59.239 
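The four result blocks above come from four separate bdevperf processes (write, read, flush, unmap), each pinned to its own core and all attaching to the same cnode1 target over TCP. The --json /dev/fd/63 in their command lines is bash process substitution: gen_nvmf_target_json builds a small JSON config containing the bdev_nvme_attach_controller call printed in the trace (Nvme1 to 10.0.0.2:4420, cnode1), and bdevperf reads it from that file descriptor. A sketch of one launch with the values from this run:

    # write job: core mask 0x10, queue depth 128, 4 KiB I/O, 1 second, 256 MB of hugepage memory
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)       # the shell presents this as --json /dev/fd/63

The read, flush and unmap jobs differ only in -m, -i and -w; target/bdev_io_wait.sh then waits on each PID (1023842, 1023844, 1023846, 1023849) before tearing the subsystem down.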
08:09:29 -- target/bdev_io_wait.sh@39 -- # wait 1023846 00:16:59.239 08:09:29 -- target/bdev_io_wait.sh@40 -- # wait 1023849 00:16:59.239 08:09:29 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.239 08:09:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:59.239 08:09:29 -- common/autotest_common.sh@10 -- # set +x 00:16:59.239 08:09:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:59.239 08:09:29 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:59.239 08:09:29 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:59.239 08:09:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.239 08:09:29 -- nvmf/common.sh@116 -- # sync 00:16:59.239 08:09:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:59.239 08:09:29 -- nvmf/common.sh@119 -- # set +e 00:16:59.239 08:09:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.239 08:09:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:59.239 rmmod nvme_tcp 00:16:59.239 rmmod nvme_fabrics 00:16:59.499 rmmod nvme_keyring 00:16:59.499 08:09:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.499 08:09:29 -- nvmf/common.sh@123 -- # set -e 00:16:59.499 08:09:29 -- nvmf/common.sh@124 -- # return 0 00:16:59.499 08:09:29 -- nvmf/common.sh@477 -- # '[' -n 1023486 ']' 00:16:59.499 08:09:29 -- nvmf/common.sh@478 -- # killprocess 1023486 00:16:59.499 08:09:29 -- common/autotest_common.sh@926 -- # '[' -z 1023486 ']' 00:16:59.499 08:09:29 -- common/autotest_common.sh@930 -- # kill -0 1023486 00:16:59.499 08:09:29 -- common/autotest_common.sh@931 -- # uname 00:16:59.499 08:09:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.499 08:09:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1023486 00:16:59.499 08:09:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:59.499 08:09:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:59.499 08:09:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1023486' 00:16:59.499 killing process with pid 1023486 00:16:59.499 08:09:29 -- common/autotest_common.sh@945 -- # kill 1023486 00:16:59.499 08:09:29 -- common/autotest_common.sh@950 -- # wait 1023486 00:16:59.499 08:09:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:59.499 08:09:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:59.499 08:09:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:59.499 08:09:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.499 08:09:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:59.499 08:09:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.499 08:09:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.499 08:09:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.043 08:09:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:02.043 00:17:02.043 real 0m12.490s 00:17:02.043 user 0m18.960s 00:17:02.043 sys 0m6.694s 00:17:02.043 08:09:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.043 08:09:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.043 ************************************ 00:17:02.043 END TEST nvmf_bdev_io_wait 00:17:02.043 ************************************ 00:17:02.043 08:09:32 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:02.043 08:09:32 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:17:02.043 08:09:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:02.043 08:09:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.043 ************************************ 00:17:02.043 START TEST nvmf_queue_depth 00:17:02.043 ************************************ 00:17:02.043 08:09:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:02.043 * Looking for test storage... 00:17:02.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.043 08:09:32 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.043 08:09:32 -- nvmf/common.sh@7 -- # uname -s 00:17:02.043 08:09:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.043 08:09:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.043 08:09:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.043 08:09:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.043 08:09:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.043 08:09:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.043 08:09:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.043 08:09:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.043 08:09:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.043 08:09:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.043 08:09:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.043 08:09:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.043 08:09:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.043 08:09:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.043 08:09:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.043 08:09:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.043 08:09:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.043 08:09:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.043 08:09:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.043 08:09:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.043 08:09:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.043 08:09:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.043 08:09:32 -- paths/export.sh@5 -- # export PATH 00:17:02.043 08:09:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.043 08:09:32 -- nvmf/common.sh@46 -- # : 0 00:17:02.043 08:09:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:02.043 08:09:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:02.043 08:09:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:02.043 08:09:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.043 08:09:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.043 08:09:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:02.043 08:09:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:02.043 08:09:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:02.043 08:09:32 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:02.043 08:09:32 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:02.043 08:09:32 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.043 08:09:32 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:02.043 08:09:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:02.043 08:09:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.043 08:09:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:02.043 08:09:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:02.043 08:09:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:02.043 08:09:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.043 08:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.043 08:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.043 08:09:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:02.043 08:09:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:02.043 08:09:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:02.043 08:09:32 -- common/autotest_common.sh@10 -- # set +x 00:17:08.630 08:09:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:08.630 08:09:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:08.630 08:09:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:08.630 08:09:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:08.630 08:09:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:08.630 08:09:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:08.630 08:09:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:08.630 08:09:39 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:08.630 08:09:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:08.630 08:09:39 -- nvmf/common.sh@295 -- # e810=() 00:17:08.630 08:09:39 -- nvmf/common.sh@295 -- # local -ga e810 00:17:08.630 08:09:39 -- nvmf/common.sh@296 -- # x722=() 00:17:08.630 08:09:39 -- nvmf/common.sh@296 -- # local -ga x722 00:17:08.630 08:09:39 -- nvmf/common.sh@297 -- # mlx=() 00:17:08.630 08:09:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:08.630 08:09:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.630 08:09:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:08.630 08:09:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:08.630 08:09:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:08.630 08:09:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.630 08:09:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:08.630 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:08.630 08:09:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.630 08:09:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:08.630 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:08.630 08:09:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:08.630 08:09:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.630 08:09:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.630 08:09:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.630 08:09:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
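What the xtrace above is doing: gather_supported_nvmf_pci_devs classifies NICs by PCI vendor:device ID (Intel 0x1592/0x159b for E810, 0x37d2 for X722, plus a list of Mellanox IDs); because SPDK_TEST_NVMF_NICS=e810, pci_devs is reset to the E810 list, and both ports in this run report 0x159b, so 0000:31:00.0 and 0000:31:00.1 are kept and their netdevs (cvl_0_0, cvl_0_1) collected. A rough, illustrative equivalent using lspci instead of the script's pci_bus_cache; only the IDs and sysfs paths are taken from the log.
e810=(); x722=(); mlx=()
while read -r addr _class vendev _; do
    case "$vendev" in
        8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (this run: 0x159b on both ports)
        8086:37d2)           x722+=("$addr") ;;   # Intel X722
        15b3:*)              mlx+=("$addr")  ;;   # Mellanox
    esac
done < <(lspci -Dn)
ls "/sys/bus/pci/devices/${e810[0]}/net/"         # kernel netdev name, e.g. cvl_0_0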
00:17:08.630 08:09:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:08.630 Found net devices under 0000:31:00.0: cvl_0_0 00:17:08.630 08:09:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.630 08:09:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.630 08:09:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.630 08:09:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.630 08:09:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.630 08:09:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:08.630 Found net devices under 0000:31:00.1: cvl_0_1 00:17:08.630 08:09:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.630 08:09:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:08.630 08:09:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:08.630 08:09:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:08.630 08:09:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:08.630 08:09:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.630 08:09:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.630 08:09:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.630 08:09:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:08.630 08:09:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.630 08:09:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.630 08:09:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:08.630 08:09:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.630 08:09:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.630 08:09:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:08.630 08:09:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:08.630 08:09:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.630 08:09:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.892 08:09:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.892 08:09:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.892 08:09:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:08.892 08:09:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.892 08:09:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.892 08:09:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.892 08:09:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:08.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:17:08.892 00:17:08.892 --- 10.0.0.2 ping statistics --- 00:17:08.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.892 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:17:08.892 08:09:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:17:08.892 00:17:08.892 --- 10.0.0.1 ping statistics --- 00:17:08.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.892 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:17:08.892 08:09:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.892 08:09:39 -- nvmf/common.sh@410 -- # return 0 00:17:08.892 08:09:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.892 08:09:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.892 08:09:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.892 08:09:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.892 08:09:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.892 08:09:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.892 08:09:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.892 08:09:39 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:08.892 08:09:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.892 08:09:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:08.892 08:09:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.892 08:09:39 -- nvmf/common.sh@469 -- # nvmfpid=1028287 00:17:08.892 08:09:39 -- nvmf/common.sh@470 -- # waitforlisten 1028287 00:17:08.892 08:09:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:08.892 08:09:39 -- common/autotest_common.sh@819 -- # '[' -z 1028287 ']' 00:17:08.892 08:09:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.892 08:09:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.892 08:09:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.892 08:09:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.892 08:09:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.154 [2024-06-11 08:09:39.574803] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:09.154 [2024-06-11 08:09:39.574865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.154 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.154 [2024-06-11 08:09:39.662638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.154 [2024-06-11 08:09:39.754056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.154 [2024-06-11 08:09:39.754209] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.154 [2024-06-11 08:09:39.754218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.154 [2024-06-11 08:09:39.754227] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
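Condensed for readability, the nvmf_tcp_init sequence traced above builds the following topology before the ping checks: the first E810 port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk network namespace and acts as the target, while the second port (cvl_0_1, 10.0.0.1/24) stays in the root namespace as the initiator. Device and namespace names are the ones from this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator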
00:17:09.154 [2024-06-11 08:09:39.754253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.727 08:09:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.727 08:09:40 -- common/autotest_common.sh@852 -- # return 0 00:17:09.727 08:09:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.727 08:09:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:09.727 08:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 08:09:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.988 08:09:40 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:09.988 08:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.988 08:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 [2024-06-11 08:09:40.405745] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.988 08:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.988 08:09:40 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:09.988 08:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.988 08:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 Malloc0 00:17:09.988 08:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.988 08:09:40 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:09.988 08:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.988 08:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 08:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.988 08:09:40 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.988 08:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.988 08:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 08:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.988 08:09:40 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.988 08:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.988 08:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 [2024-06-11 08:09:40.481328] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.988 08:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.988 08:09:40 -- target/queue_depth.sh@30 -- # bdevperf_pid=1028637 00:17:09.988 08:09:40 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.988 08:09:40 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:09.988 08:09:40 -- target/queue_depth.sh@33 -- # waitforlisten 1028637 /var/tmp/bdevperf.sock 00:17:09.988 08:09:40 -- common/autotest_common.sh@819 -- # '[' -z 1028637 ']' 00:17:09.988 08:09:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.988 08:09:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.988 08:09:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
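The queue-depth test flow set up above and driven just below reduces to the RPC sequence sketched here (condensed from the trace): the target is configured over its default /var/tmp/spdk.sock, while bdevperf, launched with -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 (1024 outstanding 4 KiB verify I/Os for 10 seconds), is configured through its own socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf started with -z waits on /var/tmp/bdevperf.sock,
# gets its NVMe-oF controller attached over that socket, then runs the workload
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests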
00:17:09.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.988 08:09:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.989 08:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:09.989 [2024-06-11 08:09:40.534369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:09.989 [2024-06-11 08:09:40.534430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028637 ] 00:17:09.989 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.989 [2024-06-11 08:09:40.599572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.250 [2024-06-11 08:09:40.673524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.821 08:09:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.821 08:09:41 -- common/autotest_common.sh@852 -- # return 0 00:17:10.821 08:09:41 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:10.821 08:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.821 08:09:41 -- common/autotest_common.sh@10 -- # set +x 00:17:10.821 NVMe0n1 00:17:10.821 08:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.821 08:09:41 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:10.821 Running I/O for 10 seconds... 00:17:23.059 00:17:23.059 Latency(us) 00:17:23.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.059 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:23.059 Verification LBA range: start 0x0 length 0x4000 00:17:23.059 NVMe0n1 : 10.04 18532.62 72.39 0.00 0.00 55092.95 10267.31 52428.80 00:17:23.059 =================================================================================================================== 00:17:23.059 Total : 18532.62 72.39 0.00 0.00 55092.95 10267.31 52428.80 00:17:23.059 0 00:17:23.059 08:09:51 -- target/queue_depth.sh@39 -- # killprocess 1028637 00:17:23.059 08:09:51 -- common/autotest_common.sh@926 -- # '[' -z 1028637 ']' 00:17:23.059 08:09:51 -- common/autotest_common.sh@930 -- # kill -0 1028637 00:17:23.059 08:09:51 -- common/autotest_common.sh@931 -- # uname 00:17:23.059 08:09:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:23.059 08:09:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1028637 00:17:23.059 08:09:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:23.059 08:09:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:23.059 08:09:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1028637' 00:17:23.059 killing process with pid 1028637 00:17:23.059 08:09:51 -- common/autotest_common.sh@945 -- # kill 1028637 00:17:23.059 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.060 00:17:23.060 Latency(us) 00:17:23.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.060 =================================================================================================================== 00:17:23.060 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.060 08:09:51 -- 
common/autotest_common.sh@950 -- # wait 1028637 00:17:23.060 08:09:51 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:23.060 08:09:51 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:23.060 08:09:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:23.060 08:09:51 -- nvmf/common.sh@116 -- # sync 00:17:23.060 08:09:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:23.060 08:09:51 -- nvmf/common.sh@119 -- # set +e 00:17:23.060 08:09:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:23.060 08:09:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:23.060 rmmod nvme_tcp 00:17:23.060 rmmod nvme_fabrics 00:17:23.060 rmmod nvme_keyring 00:17:23.060 08:09:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:23.060 08:09:51 -- nvmf/common.sh@123 -- # set -e 00:17:23.060 08:09:51 -- nvmf/common.sh@124 -- # return 0 00:17:23.060 08:09:51 -- nvmf/common.sh@477 -- # '[' -n 1028287 ']' 00:17:23.060 08:09:51 -- nvmf/common.sh@478 -- # killprocess 1028287 00:17:23.060 08:09:51 -- common/autotest_common.sh@926 -- # '[' -z 1028287 ']' 00:17:23.060 08:09:51 -- common/autotest_common.sh@930 -- # kill -0 1028287 00:17:23.060 08:09:51 -- common/autotest_common.sh@931 -- # uname 00:17:23.060 08:09:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:23.060 08:09:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1028287 00:17:23.060 08:09:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:23.060 08:09:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:23.060 08:09:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1028287' 00:17:23.060 killing process with pid 1028287 00:17:23.060 08:09:51 -- common/autotest_common.sh@945 -- # kill 1028287 00:17:23.060 08:09:51 -- common/autotest_common.sh@950 -- # wait 1028287 00:17:23.060 08:09:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:23.060 08:09:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:23.060 08:09:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:23.060 08:09:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.060 08:09:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:23.060 08:09:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.060 08:09:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.060 08:09:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.632 08:09:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:23.632 00:17:23.632 real 0m21.837s 00:17:23.632 user 0m25.313s 00:17:23.632 sys 0m6.443s 00:17:23.632 08:09:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.632 08:09:54 -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 ************************************ 00:17:23.632 END TEST nvmf_queue_depth 00:17:23.632 ************************************ 00:17:23.632 08:09:54 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:23.632 08:09:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:23.632 08:09:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:23.632 08:09:54 -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 ************************************ 00:17:23.632 START TEST nvmf_multipath 00:17:23.632 ************************************ 00:17:23.632 08:09:54 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:23.632 * Looking for test storage... 00:17:23.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.632 08:09:54 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.632 08:09:54 -- nvmf/common.sh@7 -- # uname -s 00:17:23.632 08:09:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.632 08:09:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.632 08:09:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.632 08:09:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.632 08:09:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.632 08:09:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.632 08:09:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.632 08:09:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.632 08:09:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.632 08:09:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.632 08:09:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.632 08:09:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.632 08:09:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.632 08:09:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.632 08:09:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.632 08:09:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.632 08:09:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.632 08:09:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.632 08:09:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.632 08:09:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.632 08:09:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.632 08:09:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.632 08:09:54 -- paths/export.sh@5 -- # export PATH 00:17:23.632 08:09:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.632 08:09:54 -- nvmf/common.sh@46 -- # : 0 00:17:23.632 08:09:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:23.632 08:09:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:23.632 08:09:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:23.632 08:09:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.632 08:09:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.632 08:09:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:23.632 08:09:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:23.632 08:09:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:23.632 08:09:54 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.632 08:09:54 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.632 08:09:54 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:23.632 08:09:54 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.632 08:09:54 -- target/multipath.sh@43 -- # nvmftestinit 00:17:23.632 08:09:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:23.632 08:09:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.632 08:09:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:23.632 08:09:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:23.632 08:09:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:23.632 08:09:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.632 08:09:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.632 08:09:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.632 08:09:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:23.632 08:09:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:23.632 08:09:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:23.632 08:09:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.773 08:10:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:31.773 08:10:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:31.773 08:10:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:31.773 08:10:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:31.773 08:10:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:31.774 08:10:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:31.774 08:10:01 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:17:31.774 08:10:01 -- nvmf/common.sh@294 -- # net_devs=() 00:17:31.774 08:10:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:31.774 08:10:01 -- nvmf/common.sh@295 -- # e810=() 00:17:31.774 08:10:01 -- nvmf/common.sh@295 -- # local -ga e810 00:17:31.774 08:10:01 -- nvmf/common.sh@296 -- # x722=() 00:17:31.774 08:10:01 -- nvmf/common.sh@296 -- # local -ga x722 00:17:31.774 08:10:01 -- nvmf/common.sh@297 -- # mlx=() 00:17:31.774 08:10:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:31.774 08:10:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.774 08:10:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:31.774 08:10:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:31.774 08:10:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:31.774 08:10:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:31.774 08:10:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:31.774 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:31.774 08:10:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:31.774 08:10:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:31.774 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:31.774 08:10:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:31.774 08:10:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:31.774 08:10:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.774 08:10:01 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:17:31.774 08:10:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.774 08:10:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:31.774 Found net devices under 0000:31:00.0: cvl_0_0 00:17:31.774 08:10:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.774 08:10:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:31.774 08:10:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.774 08:10:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:31.774 08:10:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.774 08:10:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:31.774 Found net devices under 0000:31:00.1: cvl_0_1 00:17:31.774 08:10:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.774 08:10:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:31.774 08:10:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:31.774 08:10:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:31.774 08:10:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.774 08:10:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.774 08:10:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.774 08:10:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:31.774 08:10:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.774 08:10:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.774 08:10:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:31.774 08:10:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.774 08:10:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.774 08:10:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:31.774 08:10:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:31.774 08:10:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.774 08:10:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.774 08:10:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.774 08:10:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.774 08:10:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:31.774 08:10:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.774 08:10:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.774 08:10:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.774 08:10:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:31.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:17:31.774 00:17:31.774 --- 10.0.0.2 ping statistics --- 00:17:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.774 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:17:31.774 08:10:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:17:31.774 00:17:31.774 --- 10.0.0.1 ping statistics --- 00:17:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.774 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:17:31.774 08:10:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.774 08:10:01 -- nvmf/common.sh@410 -- # return 0 00:17:31.774 08:10:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:31.774 08:10:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.774 08:10:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.774 08:10:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:31.774 08:10:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:31.774 08:10:01 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:31.774 08:10:01 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:31.774 only one NIC for nvmf test 00:17:31.774 08:10:01 -- target/multipath.sh@47 -- # nvmftestfini 00:17:31.774 08:10:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:31.774 08:10:01 -- nvmf/common.sh@116 -- # sync 00:17:31.774 08:10:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:31.774 08:10:01 -- nvmf/common.sh@119 -- # set +e 00:17:31.774 08:10:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:31.774 08:10:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:31.774 rmmod nvme_tcp 00:17:31.774 rmmod nvme_fabrics 00:17:31.774 rmmod nvme_keyring 00:17:31.774 08:10:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:31.774 08:10:01 -- nvmf/common.sh@123 -- # set -e 00:17:31.774 08:10:01 -- nvmf/common.sh@124 -- # return 0 00:17:31.774 08:10:01 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:31.774 08:10:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:31.774 08:10:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:31.774 08:10:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.774 08:10:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:31.774 08:10:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.774 08:10:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.774 08:10:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.159 08:10:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:33.159 08:10:03 -- target/multipath.sh@48 -- # exit 0 00:17:33.159 08:10:03 -- target/multipath.sh@1 -- # nvmftestfini 00:17:33.159 08:10:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.159 08:10:03 -- nvmf/common.sh@116 -- # sync 00:17:33.159 08:10:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.159 08:10:03 -- nvmf/common.sh@119 -- # set +e 00:17:33.159 08:10:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.159 08:10:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.159 08:10:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.159 08:10:03 -- nvmf/common.sh@123 -- # set -e 00:17:33.159 08:10:03 -- nvmf/common.sh@124 -- # return 0 00:17:33.159 08:10:03 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:33.159 08:10:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.159 08:10:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.159 08:10:03 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:17:33.159 08:10:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.159 08:10:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.159 08:10:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.159 08:10:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.159 08:10:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.159 08:10:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:33.159 00:17:33.159 real 0m9.574s 00:17:33.159 user 0m2.089s 00:17:33.159 sys 0m5.374s 00:17:33.159 08:10:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.159 08:10:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.159 ************************************ 00:17:33.159 END TEST nvmf_multipath 00:17:33.159 ************************************ 00:17:33.159 08:10:03 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:33.159 08:10:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:33.159 08:10:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:33.159 08:10:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.159 ************************************ 00:17:33.159 START TEST nvmf_zcopy 00:17:33.159 ************************************ 00:17:33.159 08:10:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:33.159 * Looking for test storage... 00:17:33.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.159 08:10:03 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.159 08:10:03 -- nvmf/common.sh@7 -- # uname -s 00:17:33.421 08:10:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.421 08:10:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.421 08:10:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.421 08:10:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.421 08:10:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.421 08:10:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.421 08:10:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.421 08:10:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.421 08:10:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.421 08:10:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.421 08:10:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.421 08:10:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.421 08:10:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.421 08:10:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.421 08:10:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.421 08:10:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.421 08:10:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.421 08:10:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.421 08:10:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.421 08:10:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.421 08:10:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.421 08:10:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.421 08:10:03 -- paths/export.sh@5 -- # export PATH 00:17:33.421 08:10:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.421 08:10:03 -- nvmf/common.sh@46 -- # : 0 00:17:33.421 08:10:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:33.421 08:10:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:33.421 08:10:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:33.421 08:10:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.421 08:10:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.421 08:10:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:33.421 08:10:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:33.421 08:10:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:33.421 08:10:03 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:33.421 08:10:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:33.421 08:10:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.421 08:10:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:33.421 08:10:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:33.421 08:10:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:33.421 08:10:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.421 08:10:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.421 08:10:03 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.421 08:10:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:33.421 08:10:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:33.421 08:10:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:33.421 08:10:03 -- common/autotest_common.sh@10 -- # set +x 00:17:41.566 08:10:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:41.566 08:10:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:41.566 08:10:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:41.566 08:10:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:41.566 08:10:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:41.566 08:10:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:41.566 08:10:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:41.566 08:10:10 -- nvmf/common.sh@294 -- # net_devs=() 00:17:41.566 08:10:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:41.566 08:10:10 -- nvmf/common.sh@295 -- # e810=() 00:17:41.566 08:10:10 -- nvmf/common.sh@295 -- # local -ga e810 00:17:41.566 08:10:10 -- nvmf/common.sh@296 -- # x722=() 00:17:41.566 08:10:10 -- nvmf/common.sh@296 -- # local -ga x722 00:17:41.566 08:10:10 -- nvmf/common.sh@297 -- # mlx=() 00:17:41.566 08:10:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:41.566 08:10:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.566 08:10:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.567 08:10:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.567 08:10:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.567 08:10:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:41.567 08:10:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:41.567 08:10:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:41.567 08:10:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:41.567 08:10:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:41.567 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:41.567 08:10:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:41.567 08:10:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:41.567 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:41.567 
08:10:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:41.567 08:10:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:41.567 08:10:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.567 08:10:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:41.567 08:10:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.567 08:10:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:41.567 Found net devices under 0000:31:00.0: cvl_0_0 00:17:41.567 08:10:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.567 08:10:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:41.567 08:10:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.567 08:10:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:41.567 08:10:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.567 08:10:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:41.567 Found net devices under 0000:31:00.1: cvl_0_1 00:17:41.567 08:10:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.567 08:10:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:41.567 08:10:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:41.567 08:10:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:41.567 08:10:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:41.567 08:10:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.567 08:10:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.567 08:10:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.567 08:10:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:41.567 08:10:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.567 08:10:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.567 08:10:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:41.567 08:10:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.567 08:10:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.567 08:10:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:41.567 08:10:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:41.567 08:10:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.567 08:10:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.567 08:10:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.567 08:10:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.567 08:10:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:41.567 08:10:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.567 08:10:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.567 08:10:11 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.567 08:10:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:41.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:17:41.567 00:17:41.567 --- 10.0.0.2 ping statistics --- 00:17:41.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.567 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:17:41.567 08:10:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:41.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:17:41.567 00:17:41.567 --- 10.0.0.1 ping statistics --- 00:17:41.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.567 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:17:41.567 08:10:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.567 08:10:11 -- nvmf/common.sh@410 -- # return 0 00:17:41.567 08:10:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:41.567 08:10:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.567 08:10:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:41.567 08:10:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:41.567 08:10:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.567 08:10:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:41.567 08:10:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:41.567 08:10:11 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:41.567 08:10:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.567 08:10:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:41.567 08:10:11 -- common/autotest_common.sh@10 -- # set +x 00:17:41.567 08:10:11 -- nvmf/common.sh@469 -- # nvmfpid=1039246 00:17:41.567 08:10:11 -- nvmf/common.sh@470 -- # waitforlisten 1039246 00:17:41.567 08:10:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.567 08:10:11 -- common/autotest_common.sh@819 -- # '[' -z 1039246 ']' 00:17:41.567 08:10:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.567 08:10:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.567 08:10:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.567 08:10:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.567 08:10:11 -- common/autotest_common.sh@10 -- # set +x 00:17:41.567 [2024-06-11 08:10:11.255531] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
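Editor's note: the nvmf_tcp_init trace above splits the two E810 ports between the host and a private network namespace: cvl_0_1 stays in the host as the initiator interface (10.0.0.1), cvl_0_0 is moved into cvl_0_0_ns_spdk as the target interface (10.0.0.2), port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is then launched inside that namespace with core mask 0x2. Condensed from the trace (commands copied from the log, shown here only as a readable summary), the topology boils down to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side (host stack)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                    # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target namespace -> host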
00:17:41.567 [2024-06-11 08:10:11.255600] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.567 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.567 [2024-06-11 08:10:11.346032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.567 [2024-06-11 08:10:11.437751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.567 [2024-06-11 08:10:11.437905] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.567 [2024-06-11 08:10:11.437915] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.567 [2024-06-11 08:10:11.437922] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.567 [2024-06-11 08:10:11.437949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.567 08:10:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.567 08:10:12 -- common/autotest_common.sh@852 -- # return 0 00:17:41.567 08:10:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:41.567 08:10:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:41.567 08:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.568 08:10:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.568 08:10:12 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:41.568 08:10:12 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:41.568 08:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.568 08:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.568 [2024-06-11 08:10:12.077120] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.568 08:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.568 08:10:12 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:41.568 08:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.568 08:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.568 08:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.568 08:10:12 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.568 08:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.568 08:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.568 [2024-06-11 08:10:12.101364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.568 08:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.568 08:10:12 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:41.568 08:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.568 08:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.568 08:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.568 08:10:12 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:41.568 08:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.568 08:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.568 malloc0 00:17:41.568 08:10:12 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:17:41.568 08:10:12 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:41.568 08:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:41.568 08:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.568 08:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:41.568 08:10:12 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:41.568 08:10:12 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:41.568 08:10:12 -- nvmf/common.sh@520 -- # config=() 00:17:41.568 08:10:12 -- nvmf/common.sh@520 -- # local subsystem config 00:17:41.568 08:10:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:41.568 08:10:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:41.568 { 00:17:41.568 "params": { 00:17:41.568 "name": "Nvme$subsystem", 00:17:41.568 "trtype": "$TEST_TRANSPORT", 00:17:41.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:41.568 "adrfam": "ipv4", 00:17:41.568 "trsvcid": "$NVMF_PORT", 00:17:41.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:41.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:41.568 "hdgst": ${hdgst:-false}, 00:17:41.568 "ddgst": ${ddgst:-false} 00:17:41.568 }, 00:17:41.568 "method": "bdev_nvme_attach_controller" 00:17:41.568 } 00:17:41.568 EOF 00:17:41.568 )") 00:17:41.568 08:10:12 -- nvmf/common.sh@542 -- # cat 00:17:41.568 08:10:12 -- nvmf/common.sh@544 -- # jq . 00:17:41.568 08:10:12 -- nvmf/common.sh@545 -- # IFS=, 00:17:41.568 08:10:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:41.568 "params": { 00:17:41.568 "name": "Nvme1", 00:17:41.568 "trtype": "tcp", 00:17:41.568 "traddr": "10.0.0.2", 00:17:41.568 "adrfam": "ipv4", 00:17:41.568 "trsvcid": "4420", 00:17:41.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.568 "hdgst": false, 00:17:41.568 "ddgst": false 00:17:41.568 }, 00:17:41.568 "method": "bdev_nvme_attach_controller" 00:17:41.568 }' 00:17:41.568 [2024-06-11 08:10:12.196599] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:41.568 [2024-06-11 08:10:12.196669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039500 ] 00:17:41.829 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.829 [2024-06-11 08:10:12.263565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.829 [2024-06-11 08:10:12.335196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.090 Running I/O for 10 seconds... 
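Editor's note: target provisioning in the trace above is a handful of rpc_cmd calls: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, a 32 MB malloc bdev with 4 KiB blocks, and that bdev attached as NSID 1. gen_nvmf_target_json then emits the bdev_nvme_attach_controller JSON shown above, which bdevperf reads over /dev/fd/62 and drives with a 10-second, queue-depth-128 verify workload at 8 KiB I/O. As a sketch only, the same provisioning expressed directly with scripts/rpc.py (rpc_cmd is a wrapper over it) would look roughly like this, with arguments copied from the trace:

# Sketch: direct rpc.py equivalent of the rpc_cmd calls above
# (the RPC socket defaults to /var/tmp/spdk.sock, as used by waitforlisten).
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1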
00:17:52.092 00:17:52.092 Latency(us) 00:17:52.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.092 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:52.092 Verification LBA range: start 0x0 length 0x1000 00:17:52.092 Nvme1n1 : 10.01 14061.35 109.85 0.00 0.00 9074.80 1153.71 19988.48 00:17:52.092 =================================================================================================================== 00:17:52.092 Total : 14061.35 109.85 0.00 0.00 9074.80 1153.71 19988.48 00:17:52.353 08:10:22 -- target/zcopy.sh@39 -- # perfpid=1041528 00:17:52.353 08:10:22 -- target/zcopy.sh@41 -- # xtrace_disable 00:17:52.353 08:10:22 -- common/autotest_common.sh@10 -- # set +x 00:17:52.353 08:10:22 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:52.353 08:10:22 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:52.353 08:10:22 -- nvmf/common.sh@520 -- # config=() 00:17:52.353 08:10:22 -- nvmf/common.sh@520 -- # local subsystem config 00:17:52.353 08:10:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:52.353 08:10:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:52.353 { 00:17:52.353 "params": { 00:17:52.353 "name": "Nvme$subsystem", 00:17:52.353 "trtype": "$TEST_TRANSPORT", 00:17:52.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.353 "adrfam": "ipv4", 00:17:52.353 "trsvcid": "$NVMF_PORT", 00:17:52.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.353 "hdgst": ${hdgst:-false}, 00:17:52.353 "ddgst": ${ddgst:-false} 00:17:52.353 }, 00:17:52.353 "method": "bdev_nvme_attach_controller" 00:17:52.353 } 00:17:52.353 EOF 00:17:52.353 )") 00:17:52.353 08:10:22 -- nvmf/common.sh@542 -- # cat 00:17:52.353 [2024-06-11 08:10:22.770233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.770260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 08:10:22 -- nvmf/common.sh@544 -- # jq . 
00:17:52.353 08:10:22 -- nvmf/common.sh@545 -- # IFS=, 00:17:52.353 08:10:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:52.353 "params": { 00:17:52.353 "name": "Nvme1", 00:17:52.353 "trtype": "tcp", 00:17:52.353 "traddr": "10.0.0.2", 00:17:52.353 "adrfam": "ipv4", 00:17:52.353 "trsvcid": "4420", 00:17:52.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.353 "hdgst": false, 00:17:52.353 "ddgst": false 00:17:52.353 }, 00:17:52.353 "method": "bdev_nvme_attach_controller" 00:17:52.353 }' 00:17:52.353 [2024-06-11 08:10:22.782231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.782239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.794259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.794266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.806287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.806294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.807944] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:52.353 [2024-06-11 08:10:22.807990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1041528 ] 00:17:52.353 [2024-06-11 08:10:22.818318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.818325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.830349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.830356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.353 [2024-06-11 08:10:22.842382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.842389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.854412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.854418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.866447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.866453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.866686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.353 [2024-06-11 08:10:22.878479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.878487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.890507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.890514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
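Editor's note: from here the trace is dominated by repeating pairs of "Requested NSID 1 already in use" / "Unable to add namespace". These are expected in this phase: while the second bdevperf instance (spdk_pid1041528; 5-second randrw at a 50% read mix, queue depth 128, 8 KiB I/O, config passed on /dev/fd/63) keeps I/O in flight, target/zcopy.sh evidently keeps re-issuing the add-namespace RPC against a namespace that is still attached, so the target rejects each attempt and logs the pair. A single such rejection can be reproduced with the same command already used during setup (sketch only; the exact loop lives in target/zcopy.sh):

# NSID 1 is already attached (see the setup above), so re-adding it is refused
# and the target prints the same two errors that repeat throughout this run.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1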
00:17:52.353 [2024-06-11 08:10:22.902540] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.902551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.914570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.914579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.926600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.926608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.929433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.353 [2024-06-11 08:10:22.938631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.938639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.950666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.950679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.962697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.962706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.974726] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.353 [2024-06-11 08:10:22.974734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.353 [2024-06-11 08:10:22.986756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.354 [2024-06-11 08:10:22.986764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.354 [2024-06-11 08:10:22.998789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.354 [2024-06-11 08:10:22.998796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.010829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.010843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.022857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.022868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.034891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.034902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.046924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.046935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.058953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.058961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.104893] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.104906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.115102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.115110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 Running I/O for 5 seconds... 00:17:52.615 [2024-06-11 08:10:23.129407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.129423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.142866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.142882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.155843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.155858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.168915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.168932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.181556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.181571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.194394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.194414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.207412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.207428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.220225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.220240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.232834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.232849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.245733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.245748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.615 [2024-06-11 08:10:23.258895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.615 [2024-06-11 08:10:23.258910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.271727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.271742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.284622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 
[2024-06-11 08:10:23.284636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.297499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.297513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.310326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.310342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.323203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.323218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.335961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.335975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.348906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.348921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.361421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.361436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.373894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.373909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.386799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.386813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.399848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.399863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.413194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.413210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.426110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.426125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.438472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.438490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.451096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.451110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.463916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.463931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.875 [2024-06-11 08:10:23.476478] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.875 [2024-06-11 08:10:23.476494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.876 [2024-06-11 08:10:23.489452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.876 [2024-06-11 08:10:23.489466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.876 [2024-06-11 08:10:23.501900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.876 [2024-06-11 08:10:23.501914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:52.876 [2024-06-11 08:10:23.515052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:52.876 [2024-06-11 08:10:23.515067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.528119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.528134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.540643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.540658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.553922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.553938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.566029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.566044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.578772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.578786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.591927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.591942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.604646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.604662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.617149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.617164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.630166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.630181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.642932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.642947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.655851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.655866] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.668822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.668836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.681127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.681142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.694477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.694492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.707495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.707510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.720267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.720281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.733251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.733266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.745957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.745972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.759228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.759242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.136 [2024-06-11 08:10:23.772430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.136 [2024-06-11 08:10:23.772449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.785160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.785175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.797765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.797779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.810460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.810475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.823194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.823209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.835802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.835817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.848750] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.848764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.861502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.861516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.874194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.874209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.887037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.887051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.899697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.899711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.912032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.912047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.924720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.924735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.937703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.937718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.950631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.950645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.963617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.963632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.976332] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.976346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:23.989496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:23.989511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:24.002029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:24.002044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:24.015241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:24.015256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:24.027958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:24.027972] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.397 [2024-06-11 08:10:24.040888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.397 [2024-06-11 08:10:24.040903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.053555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.053570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.066548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.066562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.079714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.079729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.092920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.092935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.105224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.105238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.118123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.118138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.130672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.130687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.143770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.143784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.156178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.156193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.168922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.168936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.181966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.181980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.194907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.194921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.207485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.207499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.220162] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.220176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.233257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.233272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.246248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.246263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.259111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.259125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.272181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.272195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.285242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.285257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.658 [2024-06-11 08:10:24.298395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.658 [2024-06-11 08:10:24.298409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.311228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.311243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.324230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.324245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.336463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.336478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.349356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.349370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.362435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.362454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.375001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.375015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.387627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.387641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.400461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.400476] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.413133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.413148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.425854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.425868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.438679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.438693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.451708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.451722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.464541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.464555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.477585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.477599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.490607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.490621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.503222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.503236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.516245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.516259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.528794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.528809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.541698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.541713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:53.919 [2024-06-11 08:10:24.554581] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:53.919 [2024-06-11 08:10:24.554595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.567442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.567457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.580261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.580275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.592887] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.592901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.605707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.605721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.618573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.618588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.631746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.631760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.644714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.644732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.657583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.657598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.670358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.670372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.683375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.683390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.695821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.695835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.709302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.709317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.722369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.722384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.735261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.735276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.748302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.748316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.761231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.761246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.774136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.774152] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.786987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.787001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.799952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.799968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.813061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.813076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.181 [2024-06-11 08:10:24.826189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.181 [2024-06-11 08:10:24.826204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.838743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.838757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.851711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.851726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.864345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.864359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.877297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.877312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.890169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.890188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.902282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.902297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.914925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.914940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.927528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.927543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.940470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.940484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.952914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.952928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.965978] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.965993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.978825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.978839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:24.991377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:24.991391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:25.004202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:25.004217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:25.017457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:25.017472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:25.030645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:25.030660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:25.042884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:25.042898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:25.056024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:25.056039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:25.068858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:25.068873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.443 [2024-06-11 08:10:25.081764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.443 [2024-06-11 08:10:25.081779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.094217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.094232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.107309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.107325] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.120192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.120206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.132753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.132775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.145885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.145900] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.158695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.158710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.171369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.171383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.184222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.184237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.197368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.197382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.210258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.210273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.223294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.223309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.235779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.235793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.248844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.248858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.261960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.261975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.274081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.274096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.287031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.287046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.299388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.299403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.311912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.311927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.325128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.325142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.704 [2024-06-11 08:10:25.337909] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.704 [2024-06-11 08:10:25.337923] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.351109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.351124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.363879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.363893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.376615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.376633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.389255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.389270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.401695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.401710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.414460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.414475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.427233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.427247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.440415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.440429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.453298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.453313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.466073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.466088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.478840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.478855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.491526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.491540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.504392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.504407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.516940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.516954] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.529939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.529953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.542626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.542640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.555391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.555405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.568372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.568386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.581226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.966 [2024-06-11 08:10:25.581241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.966 [2024-06-11 08:10:25.594185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.967 [2024-06-11 08:10:25.594200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:54.967 [2024-06-11 08:10:25.606302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:54.967 [2024-06-11 08:10:25.606317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.619153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.619168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.632218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.632232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.645122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.645136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.658412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.658426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.670872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.670886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.683900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.683914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.696907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.696921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.709978] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.709993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.723111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.723125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.736224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.736238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.748959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.748973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.761474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.761488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.774187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.774202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.786822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.786837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.800125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.800140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.812919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.812934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.825879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.825893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.838925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.838939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.851770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.851784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.228 [2024-06-11 08:10:25.864735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.228 [2024-06-11 08:10:25.864750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.877594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.877608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.890533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.890548] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.903260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.903274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.916440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.916454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.928991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.929005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.941794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.941808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.954301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.954315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.966674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.966689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.979466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.979480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:25.991807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:25.991821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.004285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.004299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.016871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.016885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.029731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.029745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.042211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.042225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.055356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.055371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.067757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.067772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.080449] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.080463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.093420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.093434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.106159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.106174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.119270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.119285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.490 [2024-06-11 08:10:26.132037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.490 [2024-06-11 08:10:26.132052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.144707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.144721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.157492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.157507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.170096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.170110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.183074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.183088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.196031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.196045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.208896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.208910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.221446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.221461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.234347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.234362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.247454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.247468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.260142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.260156] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.272823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.272838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.286163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.286177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.298873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.298888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.311814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.311829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.324873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.324887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.337832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.337847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.350701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.350715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.363585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.363599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.376626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.376640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:55.752 [2024-06-11 08:10:26.389375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:55.752 [2024-06-11 08:10:26.389389] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.012 [2024-06-11 08:10:26.402357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.012 [2024-06-11 08:10:26.402371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.012 [2024-06-11 08:10:26.415426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.012 [2024-06-11 08:10:26.415446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.012 [2024-06-11 08:10:26.428513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.428528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.441111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.441127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.453933] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.453948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.466946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.466961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.480110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.480124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.492938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.492952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.506049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.506064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.518028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.518042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.530803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.530817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.543633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.543648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.556646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.556661] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.569736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.569751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.582791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.582809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.595862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.595876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.608483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.608498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.621331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.621346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.633869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.633884] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.013 [2024-06-11 08:10:26.646844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.013 [2024-06-11 08:10:26.646859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.659970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.659985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.672648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.672663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.685721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.685736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.698948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.698963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.712120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.712136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.725073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.725088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.738008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.738022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.750902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.750917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.764016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.764031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.776827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.776841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.789960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.789974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.803100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.803115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.815817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.815832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.828763] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.828781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.842028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.842043] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.855321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.855336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.868030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.868045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.880850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.880864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.893848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.893862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.906676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.906691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.274 [2024-06-11 08:10:26.919769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.274 [2024-06-11 08:10:26.919783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:26.932387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.535 [2024-06-11 08:10:26.932401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:26.945680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.535 [2024-06-11 08:10:26.945694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:26.958207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.535 [2024-06-11 08:10:26.958222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:26.970919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.535 [2024-06-11 08:10:26.970934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:26.983714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.535 [2024-06-11 08:10:26.983728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:26.996464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.535 [2024-06-11 08:10:26.996479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:27.009346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.535 [2024-06-11 08:10:27.009360] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.535 [2024-06-11 08:10:27.022277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.022293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.035058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.035072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.047981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.047996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.061122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.061137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.074040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.074058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.086804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.086818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.099655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.099670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.112358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.112373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.125340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.125355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.138277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.138291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.151302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.151316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.164041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.164056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.536 [2024-06-11 08:10:27.176893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.536 [2024-06-11 08:10:27.176908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.190122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.190136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.203176] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.203190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.215383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.215398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.228331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.228346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.240979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.240993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.254043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.254057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.266883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.266897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.797 [2024-06-11 08:10:27.279557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.797 [2024-06-11 08:10:27.279571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.292510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.292525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.304737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.304751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.317098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.317116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.329259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.329273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.342146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.342160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.355078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.355092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.368089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.368104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.380771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.380786] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.393905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.393920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.406815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.406829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.419450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.419465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:56.798 [2024-06-11 08:10:27.432260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:56.798 [2024-06-11 08:10:27.432274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.444707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.444722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.457625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.457640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.470455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.470470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.483385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.483400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.496234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.496249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.508843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.508857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.521633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.521648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.534202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.534217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.546717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.546732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.559782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.559799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.572787] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.572802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.585920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.585935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.598918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.598932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.611509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.611523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.624209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.624223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.637218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.637232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.650142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.650157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.663052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.663066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.675911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.675926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.689069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.689083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.059 [2024-06-11 08:10:27.701429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.059 [2024-06-11 08:10:27.701447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.714693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.714708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.727696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.727710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.740743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.740758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.753598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.753612] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.766279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.766293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.779463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.779478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.792457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.792472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.805237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.805252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.817648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.817663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.830388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.830403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.843369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.843384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.855442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.855455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.867952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.867967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.881334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.881348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.894224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.894238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.907167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.907181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.920207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.920222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.933214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.933228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.946007] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.946021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.321 [2024-06-11 08:10:27.958887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.321 [2024-06-11 08:10:27.958901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:27.971757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:27.971772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:27.984656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:27.984671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:27.997186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:27.997200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.010280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.010294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.023329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.023344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.035792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.035806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.048734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.048748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.061340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.061354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.073743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.073758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.086180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.086195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.098968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.098982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.111482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.111496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.124406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.124420] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.134756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.134770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 00:17:57.582 Latency(us) 00:17:57.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.582 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:57.582 Nvme1n1 : 5.01 20282.64 158.46 0.00 0.00 6304.29 2744.32 14636.37 00:17:57.582 =================================================================================================================== 00:17:57.582 Total : 20282.64 158.46 0.00 0.00 6304.29 2744.32 14636.37 00:17:57.582 [2024-06-11 08:10:28.146081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.146092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.158119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.158134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.170145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.170157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.182177] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.182189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.194205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.194216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.206234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.206243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.582 [2024-06-11 08:10:28.218264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.582 [2024-06-11 08:10:28.218272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.842 [2024-06-11 08:10:28.230297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.842 [2024-06-11 08:10:28.230308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.842 [2024-06-11 08:10:28.242327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.842 [2024-06-11 08:10:28.242336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.842 [2024-06-11 08:10:28.254360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.842 [2024-06-11 08:10:28.254369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.842 [2024-06-11 08:10:28.266391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:57.842 [2024-06-11 08:10:28.266399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:57.842 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1041528) - No such process 00:17:57.843 08:10:28 -- target/zcopy.sh@49 -- # wait 1041528 00:17:57.843 08:10:28 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:57.843 08:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:57.843 08:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:57.843 08:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:57.843 08:10:28 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:57.843 08:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:57.843 08:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:57.843 delay0 00:17:57.843 08:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:57.843 08:10:28 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:57.843 08:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:57.843 08:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:57.843 08:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:57.843 08:10:28 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:57.843 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.843 [2024-06-11 08:10:28.375510] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:05.981 Initializing NVMe Controllers 00:18:05.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.981 Initialization complete. Launching workers. 
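(For reference, the zcopy wrap-up traced just above can be reproduced by hand roughly as follows. This is only a sketch: it assumes nvmf_tgt is already running with subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, that a malloc0 bdev already exists, and that the SPDK tree is the current directory; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py.)

rpc=./scripts/rpc.py
# drop the existing namespace, then re-add it backed by a delay bdev (flags copied from the trace)
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# drive the delayed namespace with the abort example over NVMe/TCP, as the test does
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'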
00:18:05.981 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 31305 00:18:05.981 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 31423, failed to submit 120 00:18:05.981 success 31327, unsuccess 96, failed 0 00:18:05.981 08:10:35 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:05.981 08:10:35 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:05.981 08:10:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:05.981 08:10:35 -- nvmf/common.sh@116 -- # sync 00:18:05.981 08:10:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:05.981 08:10:35 -- nvmf/common.sh@119 -- # set +e 00:18:05.981 08:10:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:05.981 08:10:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:05.981 rmmod nvme_tcp 00:18:05.981 rmmod nvme_fabrics 00:18:05.981 rmmod nvme_keyring 00:18:05.981 08:10:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:05.981 08:10:35 -- nvmf/common.sh@123 -- # set -e 00:18:05.981 08:10:35 -- nvmf/common.sh@124 -- # return 0 00:18:05.981 08:10:35 -- nvmf/common.sh@477 -- # '[' -n 1039246 ']' 00:18:05.981 08:10:35 -- nvmf/common.sh@478 -- # killprocess 1039246 00:18:05.981 08:10:35 -- common/autotest_common.sh@926 -- # '[' -z 1039246 ']' 00:18:05.981 08:10:35 -- common/autotest_common.sh@930 -- # kill -0 1039246 00:18:05.981 08:10:35 -- common/autotest_common.sh@931 -- # uname 00:18:05.981 08:10:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:05.981 08:10:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1039246 00:18:05.981 08:10:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:05.981 08:10:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:05.981 08:10:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1039246' 00:18:05.981 killing process with pid 1039246 00:18:05.981 08:10:35 -- common/autotest_common.sh@945 -- # kill 1039246 00:18:05.981 08:10:35 -- common/autotest_common.sh@950 -- # wait 1039246 00:18:05.981 08:10:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:05.981 08:10:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:05.981 08:10:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:05.981 08:10:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.981 08:10:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:05.981 08:10:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.981 08:10:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.981 08:10:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.367 08:10:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:07.367 00:18:07.367 real 0m34.057s 00:18:07.367 user 0m45.428s 00:18:07.367 sys 0m11.016s 00:18:07.367 08:10:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:07.367 08:10:37 -- common/autotest_common.sh@10 -- # set +x 00:18:07.367 ************************************ 00:18:07.367 END TEST nvmf_zcopy 00:18:07.367 ************************************ 00:18:07.367 08:10:37 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:07.367 08:10:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:07.367 08:10:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:07.367 08:10:37 -- common/autotest_common.sh@10 -- # set +x 00:18:07.367 
************************************ 00:18:07.367 START TEST nvmf_nmic 00:18:07.367 ************************************ 00:18:07.367 08:10:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:07.367 * Looking for test storage... 00:18:07.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.367 08:10:37 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.367 08:10:37 -- nvmf/common.sh@7 -- # uname -s 00:18:07.367 08:10:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.367 08:10:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.367 08:10:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.367 08:10:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.367 08:10:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.367 08:10:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.367 08:10:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.367 08:10:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.367 08:10:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.367 08:10:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.367 08:10:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.367 08:10:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.367 08:10:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.367 08:10:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.367 08:10:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.367 08:10:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.367 08:10:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.367 08:10:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.367 08:10:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.367 08:10:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.367 08:10:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.367 08:10:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.367 08:10:37 -- paths/export.sh@5 -- # export PATH 00:18:07.367 08:10:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.367 08:10:37 -- nvmf/common.sh@46 -- # : 0 00:18:07.367 08:10:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:07.367 08:10:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:07.367 08:10:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:07.367 08:10:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.367 08:10:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.367 08:10:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:07.367 08:10:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:07.367 08:10:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:07.367 08:10:37 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.367 08:10:37 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.367 08:10:37 -- target/nmic.sh@14 -- # nvmftestinit 00:18:07.367 08:10:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:07.367 08:10:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.367 08:10:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:07.367 08:10:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:07.367 08:10:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:07.367 08:10:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.367 08:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.367 08:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.367 08:10:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:07.367 08:10:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:07.367 08:10:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:07.367 08:10:37 -- common/autotest_common.sh@10 -- # set +x 00:18:14.181 08:10:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:14.181 08:10:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:14.181 08:10:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:14.181 08:10:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:14.181 08:10:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:14.181 08:10:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:14.181 08:10:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:14.181 08:10:44 -- nvmf/common.sh@294 -- # net_devs=() 00:18:14.181 08:10:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:14.181 08:10:44 -- nvmf/common.sh@295 -- # 
e810=() 00:18:14.181 08:10:44 -- nvmf/common.sh@295 -- # local -ga e810 00:18:14.181 08:10:44 -- nvmf/common.sh@296 -- # x722=() 00:18:14.181 08:10:44 -- nvmf/common.sh@296 -- # local -ga x722 00:18:14.181 08:10:44 -- nvmf/common.sh@297 -- # mlx=() 00:18:14.181 08:10:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:14.181 08:10:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:14.181 08:10:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:14.182 08:10:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:14.182 08:10:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:14.182 08:10:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:14.182 08:10:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:14.182 08:10:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:14.182 08:10:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:14.182 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:14.182 08:10:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:14.182 08:10:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:14.182 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:14.182 08:10:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:14.182 08:10:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:14.182 08:10:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.182 08:10:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:14.182 08:10:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.182 08:10:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:14.182 Found net 
devices under 0000:31:00.0: cvl_0_0 00:18:14.182 08:10:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.182 08:10:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:14.182 08:10:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:14.182 08:10:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:14.182 08:10:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:14.182 08:10:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:14.182 Found net devices under 0000:31:00.1: cvl_0_1 00:18:14.182 08:10:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:14.182 08:10:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:14.182 08:10:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:14.182 08:10:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:14.182 08:10:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:14.182 08:10:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.182 08:10:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.182 08:10:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:14.182 08:10:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:14.182 08:10:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:14.182 08:10:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:14.182 08:10:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:14.182 08:10:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:14.182 08:10:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.182 08:10:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:14.182 08:10:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:14.182 08:10:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:14.182 08:10:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:14.443 08:10:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:14.443 08:10:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:14.443 08:10:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:14.443 08:10:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:14.443 08:10:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:14.443 08:10:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:14.443 08:10:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:14.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:18:14.443 00:18:14.443 --- 10.0.0.2 ping statistics --- 00:18:14.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.443 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:18:14.443 08:10:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:14.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:18:14.443 00:18:14.443 --- 10.0.0.1 ping statistics --- 00:18:14.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.443 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:18:14.443 08:10:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.443 08:10:45 -- nvmf/common.sh@410 -- # return 0 00:18:14.443 08:10:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:14.443 08:10:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.443 08:10:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:14.443 08:10:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:14.443 08:10:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.443 08:10:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:14.443 08:10:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:14.704 08:10:45 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:14.704 08:10:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:14.704 08:10:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:14.704 08:10:45 -- common/autotest_common.sh@10 -- # set +x 00:18:14.704 08:10:45 -- nvmf/common.sh@469 -- # nvmfpid=1048329 00:18:14.704 08:10:45 -- nvmf/common.sh@470 -- # waitforlisten 1048329 00:18:14.704 08:10:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.704 08:10:45 -- common/autotest_common.sh@819 -- # '[' -z 1048329 ']' 00:18:14.704 08:10:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.704 08:10:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:14.704 08:10:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.704 08:10:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:14.704 08:10:45 -- common/autotest_common.sh@10 -- # set +x 00:18:14.704 [2024-06-11 08:10:45.154599] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:14.704 [2024-06-11 08:10:45.154646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.704 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.704 [2024-06-11 08:10:45.220657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.704 [2024-06-11 08:10:45.284718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:14.704 [2024-06-11 08:10:45.284853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.704 [2024-06-11 08:10:45.284864] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.704 [2024-06-11 08:10:45.284872] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
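(Condensed sketch of the loopback network setup the nmic test performed above: the two E810 ports cvl_0_0/cvl_0_1 are split across network namespaces so the target can listen on 10.0.0.2 inside the namespace while the initiator connects from 10.0.0.1 on the same host. Interface names, addresses and the nvmf_tgt arguments are taken from the trace and will differ on other machines; run as root.)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions, load the initiator module, then start the target in the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &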
00:18:14.704 [2024-06-11 08:10:45.285014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.704 [2024-06-11 08:10:45.285139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.704 [2024-06-11 08:10:45.285296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.704 [2024-06-11 08:10:45.285297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.276 08:10:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:15.276 08:10:45 -- common/autotest_common.sh@852 -- # return 0 00:18:15.276 08:10:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:15.537 08:10:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:15.537 08:10:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 08:10:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.537 08:10:45 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.537 08:10:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.537 08:10:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 [2024-06-11 08:10:45.966664] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.537 08:10:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.537 08:10:45 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:15.537 08:10:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.537 08:10:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 Malloc0 00:18:15.537 08:10:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.537 08:10:45 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:15.537 08:10:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.537 08:10:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 08:10:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.537 08:10:46 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.537 08:10:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.537 08:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 08:10:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.537 08:10:46 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.537 08:10:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.537 08:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 [2024-06-11 08:10:46.023576] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:15.537 08:10:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.537 08:10:46 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:15.537 test case1: single bdev can't be used in multiple subsystems 00:18:15.537 08:10:46 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:15.537 08:10:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.537 08:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 08:10:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.537 08:10:46 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:15.537 08:10:46 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:18:15.537 08:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 08:10:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.537 08:10:46 -- target/nmic.sh@28 -- # nmic_status=0 00:18:15.537 08:10:46 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:15.537 08:10:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.537 08:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.537 [2024-06-11 08:10:46.059534] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:15.537 [2024-06-11 08:10:46.059554] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:15.537 [2024-06-11 08:10:46.059561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:15.537 request: 00:18:15.537 { 00:18:15.537 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:15.537 "namespace": { 00:18:15.537 "bdev_name": "Malloc0" 00:18:15.537 }, 00:18:15.537 "method": "nvmf_subsystem_add_ns", 00:18:15.537 "req_id": 1 00:18:15.537 } 00:18:15.537 Got JSON-RPC error response 00:18:15.537 response: 00:18:15.537 { 00:18:15.537 "code": -32602, 00:18:15.537 "message": "Invalid parameters" 00:18:15.537 } 00:18:15.537 08:10:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:15.538 08:10:46 -- target/nmic.sh@29 -- # nmic_status=1 00:18:15.538 08:10:46 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:15.538 08:10:46 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:15.538 Adding namespace failed - expected result. 00:18:15.538 08:10:46 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:15.538 test case2: host connect to nvmf target in multiple paths 00:18:15.538 08:10:46 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:15.538 08:10:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:15.538 08:10:46 -- common/autotest_common.sh@10 -- # set +x 00:18:15.538 [2024-06-11 08:10:46.071696] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:15.538 08:10:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:15.538 08:10:46 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.452 08:10:47 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:18.839 08:10:49 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:18.839 08:10:49 -- common/autotest_common.sh@1177 -- # local i=0 00:18:18.839 08:10:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.839 08:10:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:18.839 08:10:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:20.771 08:10:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:20.771 08:10:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:20.771 08:10:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.771 08:10:51 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:18:20.771 08:10:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.771 08:10:51 -- common/autotest_common.sh@1187 -- # return 0 00:18:20.771 08:10:51 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:20.771 [global] 00:18:20.771 thread=1 00:18:20.771 invalidate=1 00:18:20.771 rw=write 00:18:20.771 time_based=1 00:18:20.771 runtime=1 00:18:20.771 ioengine=libaio 00:18:20.771 direct=1 00:18:20.771 bs=4096 00:18:20.771 iodepth=1 00:18:20.771 norandommap=0 00:18:20.771 numjobs=1 00:18:20.771 00:18:20.771 verify_dump=1 00:18:20.771 verify_backlog=512 00:18:20.771 verify_state_save=0 00:18:20.771 do_verify=1 00:18:20.771 verify=crc32c-intel 00:18:20.771 [job0] 00:18:20.771 filename=/dev/nvme0n1 00:18:20.771 Could not set queue depth (nvme0n1) 00:18:21.029 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:21.029 fio-3.35 00:18:21.029 Starting 1 thread 00:18:21.960 00:18:21.960 job0: (groupid=0, jobs=1): err= 0: pid=1049833: Tue Jun 11 08:10:52 2024 00:18:21.960 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:21.960 slat (nsec): min=6309, max=58416, avg=22374.62, stdev=6465.09 00:18:21.960 clat (usec): min=708, max=1143, avg=958.39, stdev=73.56 00:18:21.960 lat (usec): min=715, max=1167, avg=980.77, stdev=75.93 00:18:21.960 clat percentiles (usec): 00:18:21.960 | 1.00th=[ 766], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 898], 00:18:21.960 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:18:21.960 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:18:21.960 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1139], 99.95th=[ 1139], 00:18:21.960 | 99.99th=[ 1139] 00:18:21.960 write: IOPS=817, BW=3269KiB/s (3347kB/s)(3272KiB/1001msec); 0 zone resets 00:18:21.960 slat (nsec): min=9052, max=64175, avg=26242.56, stdev=10374.57 00:18:21.961 clat (usec): min=180, max=866, avg=571.14, stdev=100.34 00:18:21.961 lat (usec): min=190, max=878, avg=597.38, stdev=104.80 00:18:21.961 clat percentiles (usec): 00:18:21.961 | 1.00th=[ 297], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 486], 00:18:21.961 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 594], 00:18:21.961 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 717], 00:18:21.961 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 865], 99.95th=[ 865], 00:18:21.961 | 99.99th=[ 865] 00:18:21.961 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:21.961 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:21.961 lat (usec) : 250=0.08%, 500=14.29%, 750=46.69%, 1000=27.82% 00:18:21.961 lat (msec) : 2=11.13% 00:18:21.961 cpu : usr=2.20%, sys=3.50%, ctx=1330, majf=0, minf=1 00:18:21.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:21.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.961 issued rwts: total=512,818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:21.961 00:18:21.961 Run status group 0 (all jobs): 00:18:21.961 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:18:21.961 WRITE: bw=3269KiB/s (3347kB/s), 3269KiB/s-3269KiB/s (3347kB/s-3347kB/s), io=3272KiB 
(3351kB), run=1001-1001msec 00:18:21.961 00:18:21.961 Disk stats (read/write): 00:18:21.961 nvme0n1: ios=562/652, merge=0/0, ticks=541/328, in_queue=869, util=93.59% 00:18:21.961 08:10:52 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:22.218 08:10:52 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:22.218 08:10:52 -- common/autotest_common.sh@1198 -- # local i=0 00:18:22.218 08:10:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:22.218 08:10:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.218 08:10:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:22.218 08:10:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.218 08:10:52 -- common/autotest_common.sh@1210 -- # return 0 00:18:22.218 08:10:52 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:22.218 08:10:52 -- target/nmic.sh@53 -- # nvmftestfini 00:18:22.218 08:10:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:22.218 08:10:52 -- nvmf/common.sh@116 -- # sync 00:18:22.218 08:10:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:22.218 08:10:52 -- nvmf/common.sh@119 -- # set +e 00:18:22.218 08:10:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:22.218 08:10:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:22.218 rmmod nvme_tcp 00:18:22.218 rmmod nvme_fabrics 00:18:22.218 rmmod nvme_keyring 00:18:22.479 08:10:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.479 08:10:52 -- nvmf/common.sh@123 -- # set -e 00:18:22.479 08:10:52 -- nvmf/common.sh@124 -- # return 0 00:18:22.479 08:10:52 -- nvmf/common.sh@477 -- # '[' -n 1048329 ']' 00:18:22.479 08:10:52 -- nvmf/common.sh@478 -- # killprocess 1048329 00:18:22.479 08:10:52 -- common/autotest_common.sh@926 -- # '[' -z 1048329 ']' 00:18:22.479 08:10:52 -- common/autotest_common.sh@930 -- # kill -0 1048329 00:18:22.479 08:10:52 -- common/autotest_common.sh@931 -- # uname 00:18:22.479 08:10:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:22.479 08:10:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1048329 00:18:22.479 08:10:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:22.479 08:10:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:22.479 08:10:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1048329' 00:18:22.479 killing process with pid 1048329 00:18:22.479 08:10:52 -- common/autotest_common.sh@945 -- # kill 1048329 00:18:22.479 08:10:52 -- common/autotest_common.sh@950 -- # wait 1048329 00:18:22.479 08:10:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.479 08:10:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.479 08:10:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:22.479 08:10:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.479 08:10:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.479 08:10:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.479 08:10:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.479 08:10:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.025 08:10:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:25.025 00:18:25.025 real 0m17.349s 00:18:25.025 user 0m47.814s 00:18:25.025 sys 0m6.031s 00:18:25.025 08:10:55 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:18:25.025 08:10:55 -- common/autotest_common.sh@10 -- # set +x 00:18:25.025 ************************************ 00:18:25.025 END TEST nvmf_nmic 00:18:25.025 ************************************ 00:18:25.025 08:10:55 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:25.025 08:10:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:25.025 08:10:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.025 08:10:55 -- common/autotest_common.sh@10 -- # set +x 00:18:25.025 ************************************ 00:18:25.025 START TEST nvmf_fio_target 00:18:25.025 ************************************ 00:18:25.025 08:10:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:25.025 * Looking for test storage... 00:18:25.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.025 08:10:55 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.025 08:10:55 -- nvmf/common.sh@7 -- # uname -s 00:18:25.025 08:10:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.025 08:10:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.025 08:10:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.025 08:10:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.025 08:10:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.025 08:10:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.025 08:10:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.025 08:10:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.025 08:10:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.025 08:10:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.025 08:10:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.025 08:10:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.025 08:10:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.025 08:10:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.025 08:10:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.025 08:10:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.025 08:10:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.025 08:10:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.025 08:10:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.025 08:10:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.025 08:10:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.025 08:10:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.025 08:10:55 -- paths/export.sh@5 -- # export PATH 00:18:25.025 08:10:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.025 08:10:55 -- nvmf/common.sh@46 -- # : 0 00:18:25.025 08:10:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.025 08:10:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.025 08:10:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.025 08:10:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.025 08:10:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.025 08:10:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.025 08:10:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.025 08:10:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.025 08:10:55 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.025 08:10:55 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.025 08:10:55 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.025 08:10:55 -- target/fio.sh@16 -- # nvmftestinit 00:18:25.025 08:10:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:25.025 08:10:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.025 08:10:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.025 08:10:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.025 08:10:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.025 08:10:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.025 08:10:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.025 08:10:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.025 08:10:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:25.025 08:10:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.025 08:10:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.025 08:10:55 -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.641 08:11:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:31.641 08:11:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:31.641 08:11:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:31.641 08:11:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:31.641 08:11:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:31.641 08:11:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:31.641 08:11:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:31.641 08:11:02 -- nvmf/common.sh@294 -- # net_devs=() 00:18:31.641 08:11:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:31.641 08:11:02 -- nvmf/common.sh@295 -- # e810=() 00:18:31.641 08:11:02 -- nvmf/common.sh@295 -- # local -ga e810 00:18:31.641 08:11:02 -- nvmf/common.sh@296 -- # x722=() 00:18:31.641 08:11:02 -- nvmf/common.sh@296 -- # local -ga x722 00:18:31.641 08:11:02 -- nvmf/common.sh@297 -- # mlx=() 00:18:31.641 08:11:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:31.641 08:11:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.641 08:11:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:31.641 08:11:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:31.641 08:11:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:31.641 08:11:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:31.641 08:11:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:31.641 08:11:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:31.642 08:11:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.642 08:11:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:31.642 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:31.642 08:11:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.642 08:11:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:31.642 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:31.642 08:11:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:18:31.642 08:11:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:31.642 08:11:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.642 08:11:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.642 08:11:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.642 08:11:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.642 08:11:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:31.642 Found net devices under 0000:31:00.0: cvl_0_0 00:18:31.642 08:11:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.642 08:11:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.642 08:11:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.642 08:11:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.642 08:11:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.642 08:11:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:31.642 Found net devices under 0000:31:00.1: cvl_0_1 00:18:31.642 08:11:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.642 08:11:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:31.642 08:11:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:31.642 08:11:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:31.642 08:11:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:31.642 08:11:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.642 08:11:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.642 08:11:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.642 08:11:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:31.642 08:11:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.642 08:11:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.642 08:11:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:31.910 08:11:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.910 08:11:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.910 08:11:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:31.910 08:11:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:31.910 08:11:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.910 08:11:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.910 08:11:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.910 08:11:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.910 08:11:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:31.910 08:11:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.910 08:11:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.170 08:11:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.170 08:11:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:32.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:32.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:18:32.170 00:18:32.170 --- 10.0.0.2 ping statistics --- 00:18:32.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.170 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:18:32.170 08:11:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:18:32.171 00:18:32.171 --- 10.0.0.1 ping statistics --- 00:18:32.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.171 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:18:32.171 08:11:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.171 08:11:02 -- nvmf/common.sh@410 -- # return 0 00:18:32.171 08:11:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.171 08:11:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.171 08:11:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.171 08:11:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.171 08:11:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.171 08:11:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.171 08:11:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.171 08:11:02 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:32.171 08:11:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.171 08:11:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:32.171 08:11:02 -- common/autotest_common.sh@10 -- # set +x 00:18:32.171 08:11:02 -- nvmf/common.sh@469 -- # nvmfpid=1054307 00:18:32.171 08:11:02 -- nvmf/common.sh@470 -- # waitforlisten 1054307 00:18:32.171 08:11:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:32.171 08:11:02 -- common/autotest_common.sh@819 -- # '[' -z 1054307 ']' 00:18:32.171 08:11:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.171 08:11:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.171 08:11:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.171 08:11:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.171 08:11:02 -- common/autotest_common.sh@10 -- # set +x 00:18:32.171 [2024-06-11 08:11:02.665135] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:32.171 [2024-06-11 08:11:02.665192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.171 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.171 [2024-06-11 08:11:02.737006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.171 [2024-06-11 08:11:02.809764] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:32.171 [2024-06-11 08:11:02.809895] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.171 [2024-06-11 08:11:02.809906] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:32.171 [2024-06-11 08:11:02.809914] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.171 [2024-06-11 08:11:02.810082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.171 [2024-06-11 08:11:02.810213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.171 [2024-06-11 08:11:02.810371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.171 [2024-06-11 08:11:02.810371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.102 08:11:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.102 08:11:03 -- common/autotest_common.sh@852 -- # return 0 00:18:33.102 08:11:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:33.102 08:11:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:33.102 08:11:03 -- common/autotest_common.sh@10 -- # set +x 00:18:33.102 08:11:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.102 08:11:03 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:33.102 [2024-06-11 08:11:03.622091] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.102 08:11:03 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:33.359 08:11:03 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:33.359 08:11:03 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:33.617 08:11:04 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:33.617 08:11:04 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:33.617 08:11:04 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:33.617 08:11:04 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:33.874 08:11:04 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:33.874 08:11:04 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:34.132 08:11:04 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:34.132 08:11:04 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:34.132 08:11:04 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:34.390 08:11:04 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:34.390 08:11:04 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:34.390 08:11:05 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:34.390 08:11:05 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:34.646 08:11:05 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:34.904 08:11:05 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:34.904 08:11:05 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:34.904 08:11:05 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:34.904 08:11:05 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:35.161 08:11:05 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.161 [2024-06-11 08:11:05.779677] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.417 08:11:05 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:35.417 08:11:05 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:35.674 08:11:06 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:37.063 08:11:07 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:37.063 08:11:07 -- common/autotest_common.sh@1177 -- # local i=0 00:18:37.063 08:11:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.063 08:11:07 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:18:37.063 08:11:07 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:18:37.063 08:11:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:38.980 08:11:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:38.980 08:11:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:38.980 08:11:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:38.980 08:11:09 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:18:38.980 08:11:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.980 08:11:09 -- common/autotest_common.sh@1187 -- # return 0 00:18:38.980 08:11:09 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:38.980 [global] 00:18:38.980 thread=1 00:18:38.980 invalidate=1 00:18:38.980 rw=write 00:18:38.980 time_based=1 00:18:38.980 runtime=1 00:18:38.980 ioengine=libaio 00:18:38.980 direct=1 00:18:38.980 bs=4096 00:18:38.980 iodepth=1 00:18:38.980 norandommap=0 00:18:38.980 numjobs=1 00:18:38.980 00:18:38.980 verify_dump=1 00:18:38.980 verify_backlog=512 00:18:38.980 verify_state_save=0 00:18:38.980 do_verify=1 00:18:38.980 verify=crc32c-intel 00:18:39.272 [job0] 00:18:39.272 filename=/dev/nvme0n1 00:18:39.272 [job1] 00:18:39.272 filename=/dev/nvme0n2 00:18:39.272 [job2] 00:18:39.272 filename=/dev/nvme0n3 00:18:39.272 [job3] 00:18:39.272 filename=/dev/nvme0n4 00:18:39.272 Could not set queue depth (nvme0n1) 00:18:39.272 Could not set queue depth (nvme0n2) 00:18:39.272 Could not set queue depth (nvme0n3) 00:18:39.272 Could not set queue depth (nvme0n4) 00:18:39.537 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.537 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.537 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:18:39.537 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.537 fio-3.35 00:18:39.537 Starting 4 threads 00:18:40.939 00:18:40.939 job0: (groupid=0, jobs=1): err= 0: pid=1055930: Tue Jun 11 08:11:11 2024 00:18:40.939 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:40.939 slat (nsec): min=6758, max=43104, avg=24213.58, stdev=3863.71 00:18:40.939 clat (usec): min=456, max=1171, avg=895.15, stdev=147.44 00:18:40.939 lat (usec): min=481, max=1195, avg=919.36, stdev=147.76 00:18:40.939 clat percentiles (usec): 00:18:40.939 | 1.00th=[ 570], 5.00th=[ 652], 10.00th=[ 709], 20.00th=[ 758], 00:18:40.939 | 30.00th=[ 791], 40.00th=[ 840], 50.00th=[ 906], 60.00th=[ 963], 00:18:40.939 | 70.00th=[ 1004], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:18:40.939 | 99.00th=[ 1139], 99.50th=[ 1139], 99.90th=[ 1172], 99.95th=[ 1172], 00:18:40.939 | 99.99th=[ 1172] 00:18:40.939 write: IOPS=756, BW=3025KiB/s (3098kB/s)(3028KiB/1001msec); 0 zone resets 00:18:40.939 slat (nsec): min=9688, max=63049, avg=32486.78, stdev=7097.12 00:18:40.939 clat (usec): min=115, max=1004, avg=653.64, stdev=161.80 00:18:40.939 lat (usec): min=127, max=1037, avg=686.13, stdev=163.08 00:18:40.939 clat percentiles (usec): 00:18:40.939 | 1.00th=[ 253], 5.00th=[ 338], 10.00th=[ 433], 20.00th=[ 529], 00:18:40.939 | 30.00th=[ 586], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 709], 00:18:40.939 | 70.00th=[ 750], 80.00th=[ 791], 90.00th=[ 840], 95.00th=[ 898], 00:18:40.939 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1004], 99.95th=[ 1004], 00:18:40.939 | 99.99th=[ 1004] 00:18:40.939 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:18:40.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:40.939 lat (usec) : 250=0.47%, 500=9.30%, 750=39.80%, 1000=37.67% 00:18:40.939 lat (msec) : 2=12.77% 00:18:40.939 cpu : usr=1.60%, sys=4.10%, ctx=1270, majf=0, minf=1 00:18:40.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.939 issued rwts: total=512,757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.939 job1: (groupid=0, jobs=1): err= 0: pid=1055932: Tue Jun 11 08:11:11 2024 00:18:40.939 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:40.939 slat (nsec): min=6998, max=53292, avg=25098.85, stdev=6798.00 00:18:40.939 clat (usec): min=588, max=1451, avg=1058.34, stdev=151.20 00:18:40.939 lat (usec): min=597, max=1477, avg=1083.44, stdev=152.31 00:18:40.939 clat percentiles (usec): 00:18:40.939 | 1.00th=[ 693], 5.00th=[ 783], 10.00th=[ 857], 20.00th=[ 930], 00:18:40.939 | 30.00th=[ 979], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090], 00:18:40.939 | 70.00th=[ 1139], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1303], 00:18:40.939 | 99.00th=[ 1352], 99.50th=[ 1385], 99.90th=[ 1450], 99.95th=[ 1450], 00:18:40.939 | 99.99th=[ 1450] 00:18:40.939 write: IOPS=757, BW=3029KiB/s (3102kB/s)(3032KiB/1001msec); 0 zone resets 00:18:40.939 slat (nsec): min=9170, max=59389, avg=28719.41, stdev=11026.35 00:18:40.939 clat (usec): min=174, max=1064, avg=546.43, stdev=135.65 00:18:40.939 lat (usec): min=184, max=1099, avg=575.15, stdev=140.44 00:18:40.939 clat percentiles (usec): 00:18:40.939 | 1.00th=[ 239], 5.00th=[ 310], 10.00th=[ 367], 
20.00th=[ 437], 00:18:40.939 | 30.00th=[ 478], 40.00th=[ 515], 50.00th=[ 553], 60.00th=[ 586], 00:18:40.939 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 758], 00:18:40.939 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 1057], 99.95th=[ 1057], 00:18:40.939 | 99.99th=[ 1057] 00:18:40.939 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:18:40.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:40.939 lat (usec) : 250=0.87%, 500=21.42%, 750=34.80%, 1000=15.98% 00:18:40.939 lat (msec) : 2=26.93% 00:18:40.940 cpu : usr=2.30%, sys=4.80%, ctx=1272, majf=0, minf=1 00:18:40.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.940 issued rwts: total=512,758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.940 job2: (groupid=0, jobs=1): err= 0: pid=1055946: Tue Jun 11 08:11:11 2024 00:18:40.940 read: IOPS=19, BW=77.5KiB/s (79.4kB/s)(80.0KiB/1032msec) 00:18:40.940 slat (nsec): min=24170, max=27768, avg=24547.95, stdev=771.05 00:18:40.940 clat (usec): min=40909, max=42066, avg=41748.69, stdev=417.77 00:18:40.940 lat (usec): min=40933, max=42094, avg=41773.24, stdev=417.88 00:18:40.940 clat percentiles (usec): 00:18:40.940 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:40.940 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:40.940 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:40.940 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:40.940 | 99.99th=[42206] 00:18:40.940 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:18:40.940 slat (nsec): min=9347, max=51756, avg=21393.44, stdev=10939.05 00:18:40.940 clat (usec): min=105, max=886, avg=356.16, stdev=167.96 00:18:40.940 lat (usec): min=118, max=896, avg=377.56, stdev=169.79 00:18:40.940 clat percentiles (usec): 00:18:40.940 | 1.00th=[ 111], 5.00th=[ 119], 10.00th=[ 130], 20.00th=[ 163], 00:18:40.940 | 30.00th=[ 258], 40.00th=[ 289], 50.00th=[ 363], 60.00th=[ 400], 00:18:40.940 | 70.00th=[ 469], 80.00th=[ 515], 90.00th=[ 586], 95.00th=[ 627], 00:18:40.940 | 99.00th=[ 701], 99.50th=[ 717], 99.90th=[ 889], 99.95th=[ 889], 00:18:40.940 | 99.99th=[ 889] 00:18:40.940 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:18:40.940 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:40.940 lat (usec) : 250=27.44%, 500=47.56%, 750=20.86%, 1000=0.38% 00:18:40.940 lat (msec) : 50=3.76% 00:18:40.940 cpu : usr=0.39%, sys=1.16%, ctx=532, majf=0, minf=1 00:18:40.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.940 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.940 job3: (groupid=0, jobs=1): err= 0: pid=1055950: Tue Jun 11 08:11:11 2024 00:18:40.940 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:40.940 slat (nsec): min=7231, max=42251, avg=24371.91, stdev=2124.64 00:18:40.940 clat (usec): min=611, max=1196, avg=982.02, stdev=90.35 
00:18:40.940 lat (usec): min=635, max=1220, avg=1006.39, stdev=90.43 00:18:40.940 clat percentiles (usec): 00:18:40.940 | 1.00th=[ 734], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 922], 00:18:40.940 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:18:40.940 | 70.00th=[ 1037], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:18:40.940 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 1205], 99.95th=[ 1205], 00:18:40.940 | 99.99th=[ 1205] 00:18:40.940 write: IOPS=751, BW=3005KiB/s (3077kB/s)(3008KiB/1001msec); 0 zone resets 00:18:40.940 slat (nsec): min=9151, max=62558, avg=27762.42, stdev=8889.97 00:18:40.940 clat (usec): min=242, max=889, avg=604.20, stdev=110.05 00:18:40.940 lat (usec): min=254, max=920, avg=631.96, stdev=113.47 00:18:40.940 clat percentiles (usec): 00:18:40.940 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 465], 20.00th=[ 506], 00:18:40.940 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:18:40.940 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 766], 00:18:40.940 | 99.00th=[ 824], 99.50th=[ 832], 99.90th=[ 889], 99.95th=[ 889], 00:18:40.940 | 99.99th=[ 889] 00:18:40.940 bw ( KiB/s): min= 4096, max= 4096, per=38.03%, avg=4096.00, stdev= 0.00, samples=1 00:18:40.940 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:40.940 lat (usec) : 250=0.08%, 500=11.47%, 750=44.07%, 1000=22.86% 00:18:40.940 lat (msec) : 2=21.52% 00:18:40.940 cpu : usr=2.10%, sys=3.30%, ctx=1264, majf=0, minf=1 00:18:40.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.940 issued rwts: total=512,752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.940 00:18:40.940 Run status group 0 (all jobs): 00:18:40.940 READ: bw=6031KiB/s (6176kB/s), 77.5KiB/s-2046KiB/s (79.4kB/s-2095kB/s), io=6224KiB (6373kB), run=1001-1032msec 00:18:40.940 WRITE: bw=10.5MiB/s (11.0MB/s), 1984KiB/s-3029KiB/s (2032kB/s-3102kB/s), io=10.9MiB (11.4MB), run=1001-1032msec 00:18:40.940 00:18:40.940 Disk stats (read/write): 00:18:40.940 nvme0n1: ios=537/514, merge=0/0, ticks=1406/306, in_queue=1712, util=96.39% 00:18:40.940 nvme0n2: ios=503/512, merge=0/0, ticks=1436/222, in_queue=1658, util=97.04% 00:18:40.940 nvme0n3: ios=15/512, merge=0/0, ticks=626/171, in_queue=797, util=88.36% 00:18:40.940 nvme0n4: ios=499/512, merge=0/0, ticks=494/298, in_queue=792, util=89.50% 00:18:40.940 08:11:11 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:40.940 [global] 00:18:40.940 thread=1 00:18:40.940 invalidate=1 00:18:40.940 rw=randwrite 00:18:40.940 time_based=1 00:18:40.940 runtime=1 00:18:40.940 ioengine=libaio 00:18:40.940 direct=1 00:18:40.940 bs=4096 00:18:40.940 iodepth=1 00:18:40.940 norandommap=0 00:18:40.940 numjobs=1 00:18:40.940 00:18:40.940 verify_dump=1 00:18:40.940 verify_backlog=512 00:18:40.940 verify_state_save=0 00:18:40.940 do_verify=1 00:18:40.940 verify=crc32c-intel 00:18:40.940 [job0] 00:18:40.940 filename=/dev/nvme0n1 00:18:40.940 [job1] 00:18:40.940 filename=/dev/nvme0n2 00:18:40.940 [job2] 00:18:40.940 filename=/dev/nvme0n3 00:18:40.940 [job3] 00:18:40.940 filename=/dev/nvme0n4 00:18:40.940 Could not set queue depth (nvme0n1) 00:18:40.940 Could not set queue depth (nvme0n2) 00:18:40.940 Could 
not set queue depth (nvme0n3) 00:18:40.940 Could not set queue depth (nvme0n4) 00:18:41.205 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.205 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.205 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.205 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:41.205 fio-3.35 00:18:41.205 Starting 4 threads 00:18:42.613 00:18:42.613 job0: (groupid=0, jobs=1): err= 0: pid=1056447: Tue Jun 11 08:11:12 2024 00:18:42.613 read: IOPS=170, BW=681KiB/s (698kB/s)(684KiB/1004msec) 00:18:42.613 slat (nsec): min=7687, max=43762, avg=25877.02, stdev=3749.01 00:18:42.613 clat (usec): min=791, max=42959, avg=3938.90, stdev=10575.37 00:18:42.613 lat (usec): min=816, max=42985, avg=3964.77, stdev=10575.39 00:18:42.613 clat percentiles (usec): 00:18:42.613 | 1.00th=[ 807], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:18:42.613 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:18:42.613 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[42206], 00:18:42.613 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:42.613 | 99.99th=[42730] 00:18:42.613 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:18:42.613 slat (nsec): min=8600, max=57882, avg=28304.27, stdev=8863.91 00:18:42.613 clat (usec): min=205, max=2159, avg=598.31, stdev=147.77 00:18:42.613 lat (usec): min=236, max=2195, avg=626.61, stdev=150.87 00:18:42.613 clat percentiles (usec): 00:18:42.613 | 1.00th=[ 289], 5.00th=[ 347], 10.00th=[ 416], 20.00th=[ 482], 00:18:42.613 | 30.00th=[ 529], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 00:18:42.613 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 783], 00:18:42.613 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 2147], 99.95th=[ 2147], 00:18:42.613 | 99.99th=[ 2147] 00:18:42.613 bw ( KiB/s): min= 4096, max= 4096, per=34.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:42.613 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:42.613 lat (usec) : 250=0.44%, 500=17.72%, 750=49.34%, 1000=12.01% 00:18:42.613 lat (msec) : 2=18.59%, 4=0.15%, 50=1.76% 00:18:42.613 cpu : usr=1.60%, sys=2.29%, ctx=683, majf=0, minf=1 00:18:42.613 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.613 issued rwts: total=171,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.613 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.613 job1: (groupid=0, jobs=1): err= 0: pid=1056448: Tue Jun 11 08:11:12 2024 00:18:42.613 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:42.613 slat (nsec): min=26615, max=45137, avg=27485.31, stdev=2156.31 00:18:42.613 clat (usec): min=875, max=1435, avg=1176.33, stdev=82.85 00:18:42.613 lat (usec): min=902, max=1462, avg=1203.82, stdev=82.86 00:18:42.613 clat percentiles (usec): 00:18:42.613 | 1.00th=[ 963], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1106], 00:18:42.613 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1205], 00:18:42.613 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1287], 95.00th=[ 1303], 00:18:42.613 | 99.00th=[ 1352], 
99.50th=[ 1369], 99.90th=[ 1434], 99.95th=[ 1434], 00:18:42.613 | 99.99th=[ 1434] 00:18:42.613 write: IOPS=554, BW=2218KiB/s (2271kB/s)(2220KiB/1001msec); 0 zone resets 00:18:42.613 slat (nsec): min=9001, max=52913, avg=30825.50, stdev=9216.44 00:18:42.613 clat (usec): min=302, max=1000, avg=641.41, stdev=129.99 00:18:42.613 lat (usec): min=311, max=1035, avg=672.24, stdev=133.14 00:18:42.613 clat percentiles (usec): 00:18:42.614 | 1.00th=[ 351], 5.00th=[ 441], 10.00th=[ 474], 20.00th=[ 529], 00:18:42.614 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:18:42.614 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 865], 00:18:42.614 | 99.00th=[ 963], 99.50th=[ 996], 99.90th=[ 1004], 99.95th=[ 1004], 00:18:42.614 | 99.99th=[ 1004] 00:18:42.614 bw ( KiB/s): min= 4096, max= 4096, per=34.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:42.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:42.614 lat (usec) : 500=7.97%, 750=33.93%, 1000=11.53% 00:18:42.614 lat (msec) : 2=46.58% 00:18:42.614 cpu : usr=2.60%, sys=3.90%, ctx=1070, majf=0, minf=1 00:18:42.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.614 issued rwts: total=512,555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.614 job2: (groupid=0, jobs=1): err= 0: pid=1056449: Tue Jun 11 08:11:12 2024 00:18:42.614 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:42.614 slat (nsec): min=7894, max=46815, avg=25998.26, stdev=2915.91 00:18:42.614 clat (usec): min=699, max=1306, avg=1004.40, stdev=118.86 00:18:42.614 lat (usec): min=726, max=1332, avg=1030.40, stdev=119.10 00:18:42.614 clat percentiles (usec): 00:18:42.614 | 1.00th=[ 750], 5.00th=[ 799], 10.00th=[ 840], 20.00th=[ 898], 00:18:42.614 | 30.00th=[ 938], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1045], 00:18:42.614 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1188], 00:18:42.614 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1303], 99.95th=[ 1303], 00:18:42.614 | 99.99th=[ 1303] 00:18:42.614 write: IOPS=854, BW=3417KiB/s (3499kB/s)(3420KiB/1001msec); 0 zone resets 00:18:42.614 slat (nsec): min=8500, max=62342, avg=26198.60, stdev=10089.62 00:18:42.614 clat (usec): min=137, max=1055, avg=514.69, stdev=139.80 00:18:42.614 lat (usec): min=146, max=1086, avg=540.89, stdev=144.53 00:18:42.614 clat percentiles (usec): 00:18:42.614 | 1.00th=[ 269], 5.00th=[ 310], 10.00th=[ 343], 20.00th=[ 383], 00:18:42.614 | 30.00th=[ 445], 40.00th=[ 469], 50.00th=[ 490], 60.00th=[ 529], 00:18:42.614 | 70.00th=[ 578], 80.00th=[ 627], 90.00th=[ 717], 95.00th=[ 758], 00:18:42.614 | 99.00th=[ 914], 99.50th=[ 963], 99.90th=[ 1057], 99.95th=[ 1057], 00:18:42.614 | 99.99th=[ 1057] 00:18:42.614 bw ( KiB/s): min= 4096, max= 4096, per=34.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:42.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:42.614 lat (usec) : 250=0.07%, 500=32.85%, 750=26.41%, 1000=19.31% 00:18:42.614 lat (msec) : 2=21.36% 00:18:42.614 cpu : usr=2.40%, sys=4.70%, ctx=1367, majf=0, minf=1 00:18:42.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:18:42.614 issued rwts: total=512,855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.614 job3: (groupid=0, jobs=1): err= 0: pid=1056454: Tue Jun 11 08:11:12 2024 00:18:42.614 read: IOPS=563, BW=2254KiB/s (2308kB/s)(2256KiB/1001msec) 00:18:42.614 slat (nsec): min=6871, max=42443, avg=22875.03, stdev=6099.34 00:18:42.614 clat (usec): min=628, max=1021, avg=837.04, stdev=66.65 00:18:42.614 lat (usec): min=648, max=1049, avg=859.92, stdev=67.56 00:18:42.614 clat percentiles (usec): 00:18:42.614 | 1.00th=[ 685], 5.00th=[ 717], 10.00th=[ 750], 20.00th=[ 783], 00:18:42.614 | 30.00th=[ 807], 40.00th=[ 824], 50.00th=[ 840], 60.00th=[ 857], 00:18:42.614 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 922], 95.00th=[ 938], 00:18:42.614 | 99.00th=[ 988], 99.50th=[ 1004], 99.90th=[ 1020], 99.95th=[ 1020], 00:18:42.614 | 99.99th=[ 1020] 00:18:42.614 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:42.614 slat (nsec): min=8899, max=50604, avg=24584.91, stdev=10047.78 00:18:42.614 clat (usec): min=163, max=1002, avg=467.80, stdev=133.64 00:18:42.614 lat (usec): min=195, max=1034, avg=492.39, stdev=138.57 00:18:42.614 clat percentiles (usec): 00:18:42.614 | 1.00th=[ 253], 5.00th=[ 285], 10.00th=[ 314], 20.00th=[ 347], 00:18:42.614 | 30.00th=[ 400], 40.00th=[ 437], 50.00th=[ 453], 60.00th=[ 469], 00:18:42.614 | 70.00th=[ 498], 80.00th=[ 570], 90.00th=[ 676], 95.00th=[ 742], 00:18:42.614 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 979], 99.95th=[ 1004], 00:18:42.614 | 99.99th=[ 1004] 00:18:42.614 bw ( KiB/s): min= 4096, max= 4096, per=34.90%, avg=4096.00, stdev= 0.00, samples=1 00:18:42.614 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:42.614 lat (usec) : 250=0.63%, 500=45.34%, 750=19.46%, 1000=34.32% 00:18:42.614 lat (msec) : 2=0.25% 00:18:42.614 cpu : usr=2.30%, sys=4.10%, ctx=1588, majf=0, minf=1 00:18:42.614 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.614 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.614 issued rwts: total=564,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.614 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.614 00:18:42.614 Run status group 0 (all jobs): 00:18:42.614 READ: bw=7008KiB/s (7176kB/s), 681KiB/s-2254KiB/s (698kB/s-2308kB/s), io=7036KiB (7205kB), run=1001-1004msec 00:18:42.614 WRITE: bw=11.5MiB/s (12.0MB/s), 2040KiB/s-4092KiB/s (2089kB/s-4190kB/s), io=11.5MiB (12.1MB), run=1001-1004msec 00:18:42.614 00:18:42.614 Disk stats (read/write): 00:18:42.614 nvme0n1: ios=217/512, merge=0/0, ticks=527/235, in_queue=762, util=86.77% 00:18:42.614 nvme0n2: ios=452/512, merge=0/0, ticks=1388/258, in_queue=1646, util=97.15% 00:18:42.614 nvme0n3: ios=552/594, merge=0/0, ticks=1015/254, in_queue=1269, util=96.31% 00:18:42.614 nvme0n4: ios=539/769, merge=0/0, ticks=697/334, in_queue=1031, util=91.04% 00:18:42.614 08:11:12 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:42.614 [global] 00:18:42.614 thread=1 00:18:42.614 invalidate=1 00:18:42.614 rw=write 00:18:42.614 time_based=1 00:18:42.614 runtime=1 00:18:42.614 ioengine=libaio 00:18:42.614 direct=1 00:18:42.614 bs=4096 00:18:42.614 iodepth=128 00:18:42.614 norandommap=0 00:18:42.614 numjobs=1 00:18:42.614 00:18:42.614 verify_dump=1 
00:18:42.614 verify_backlog=512 00:18:42.614 verify_state_save=0 00:18:42.614 do_verify=1 00:18:42.614 verify=crc32c-intel 00:18:42.614 [job0] 00:18:42.614 filename=/dev/nvme0n1 00:18:42.614 [job1] 00:18:42.614 filename=/dev/nvme0n2 00:18:42.614 [job2] 00:18:42.614 filename=/dev/nvme0n3 00:18:42.614 [job3] 00:18:42.614 filename=/dev/nvme0n4 00:18:42.614 Could not set queue depth (nvme0n1) 00:18:42.614 Could not set queue depth (nvme0n2) 00:18:42.614 Could not set queue depth (nvme0n3) 00:18:42.614 Could not set queue depth (nvme0n4) 00:18:42.883 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:42.883 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:42.883 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:42.883 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:42.883 fio-3.35 00:18:42.883 Starting 4 threads 00:18:44.391 00:18:44.391 job0: (groupid=0, jobs=1): err= 0: pid=1056981: Tue Jun 11 08:11:14 2024 00:18:44.391 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:18:44.391 slat (nsec): min=866, max=12848k, avg=95804.94, stdev=720838.14 00:18:44.391 clat (usec): min=1582, max=65429, avg=12591.41, stdev=7619.92 00:18:44.391 lat (usec): min=1590, max=65435, avg=12687.22, stdev=7689.40 00:18:44.391 clat percentiles (usec): 00:18:44.391 | 1.00th=[ 2278], 5.00th=[ 2999], 10.00th=[ 3654], 20.00th=[ 7046], 00:18:44.391 | 30.00th=[ 8848], 40.00th=[10028], 50.00th=[11600], 60.00th=[12649], 00:18:44.391 | 70.00th=[15401], 80.00th=[17695], 90.00th=[21103], 95.00th=[23462], 00:18:44.391 | 99.00th=[43254], 99.50th=[54789], 99.90th=[65274], 99.95th=[65274], 00:18:44.391 | 99.99th=[65274] 00:18:44.391 write: IOPS=4520, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1004msec); 0 zone resets 00:18:44.391 slat (nsec): min=1619, max=16571k, avg=116153.64, stdev=735818.14 00:18:44.391 clat (usec): min=593, max=65427, avg=16697.07, stdev=14833.75 00:18:44.391 lat (usec): min=655, max=65438, avg=16813.23, stdev=14927.20 00:18:44.391 clat percentiles (usec): 00:18:44.391 | 1.00th=[ 1221], 5.00th=[ 2442], 10.00th=[ 3392], 20.00th=[ 6849], 00:18:44.391 | 30.00th=[ 7832], 40.00th=[ 9372], 50.00th=[11469], 60.00th=[14615], 00:18:44.391 | 70.00th=[18220], 80.00th=[21103], 90.00th=[44827], 95.00th=[54264], 00:18:44.391 | 99.00th=[57410], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:18:44.391 | 99.99th=[65274] 00:18:44.391 bw ( KiB/s): min=17008, max=18288, per=26.01%, avg=17648.00, stdev=905.10, samples=2 00:18:44.392 iops : min= 4252, max= 4572, avg=4412.00, stdev=226.27, samples=2 00:18:44.392 lat (usec) : 750=0.07% 00:18:44.392 lat (msec) : 2=1.92%, 4=10.67%, 10=29.06%, 20=40.93%, 50=13.09% 00:18:44.392 lat (msec) : 100=4.27% 00:18:44.392 cpu : usr=3.19%, sys=5.08%, ctx=358, majf=0, minf=1 00:18:44.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:44.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:44.392 issued rwts: total=4096,4539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:44.392 job1: (groupid=0, jobs=1): err= 0: pid=1056982: Tue Jun 11 08:11:14 2024 00:18:44.392 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:18:44.392 slat 
(nsec): min=879, max=34534k, avg=242012.29, stdev=1869536.51 00:18:44.392 clat (msec): min=5, max=139, avg=29.06, stdev=27.11 00:18:44.392 lat (msec): min=5, max=139, avg=29.31, stdev=27.37 00:18:44.392 clat percentiles (msec): 00:18:44.392 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:18:44.392 | 30.00th=[ 9], 40.00th=[ 13], 50.00th=[ 16], 60.00th=[ 22], 00:18:44.392 | 70.00th=[ 42], 80.00th=[ 46], 90.00th=[ 73], 95.00th=[ 89], 00:18:44.392 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 122], 99.95th=[ 122], 00:18:44.392 | 99.99th=[ 140] 00:18:44.392 write: IOPS=2302, BW=9212KiB/s (9433kB/s)(9276KiB/1007msec); 0 zone resets 00:18:44.392 slat (nsec): min=1485, max=16523k, avg=214827.93, stdev=1092674.71 00:18:44.392 clat (usec): min=609, max=121433, avg=29315.52, stdev=23059.29 00:18:44.392 lat (msec): min=4, max=121, avg=29.53, stdev=23.20 00:18:44.392 clat percentiles (msec): 00:18:44.392 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:18:44.392 | 30.00th=[ 14], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 27], 00:18:44.392 | 70.00th=[ 33], 80.00th=[ 47], 90.00th=[ 62], 95.00th=[ 86], 00:18:44.392 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 104], 99.95th=[ 104], 00:18:44.392 | 99.99th=[ 122] 00:18:44.392 bw ( KiB/s): min= 7840, max= 9688, per=12.92%, avg=8764.00, stdev=1306.73, samples=2 00:18:44.392 iops : min= 1960, max= 2422, avg=2191.00, stdev=326.68, samples=2 00:18:44.392 lat (usec) : 750=0.02% 00:18:44.392 lat (msec) : 10=26.70%, 20=23.49%, 50=32.10%, 100=16.12%, 250=1.56% 00:18:44.392 cpu : usr=1.59%, sys=2.49%, ctx=249, majf=0, minf=1 00:18:44.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:44.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:44.392 issued rwts: total=2048,2319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:44.392 job2: (groupid=0, jobs=1): err= 0: pid=1056983: Tue Jun 11 08:11:14 2024 00:18:44.392 read: IOPS=6111, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:18:44.392 slat (nsec): min=900, max=21576k, avg=88214.82, stdev=755265.17 00:18:44.392 clat (usec): min=1902, max=54625, avg=11664.50, stdev=7377.75 00:18:44.392 lat (usec): min=1913, max=54652, avg=11752.71, stdev=7441.92 00:18:44.392 clat percentiles (usec): 00:18:44.392 | 1.00th=[ 4293], 5.00th=[ 5407], 10.00th=[ 5997], 20.00th=[ 6783], 00:18:44.392 | 30.00th=[ 7439], 40.00th=[ 8029], 50.00th=[ 9241], 60.00th=[10421], 00:18:44.392 | 70.00th=[12256], 80.00th=[15139], 90.00th=[20317], 95.00th=[30278], 00:18:44.392 | 99.00th=[39584], 99.50th=[41681], 99.90th=[41681], 99.95th=[42730], 00:18:44.392 | 99.99th=[54789] 00:18:44.392 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:18:44.392 slat (nsec): min=1635, max=12983k, avg=67811.94, stdev=515238.57 00:18:44.392 clat (usec): min=1214, max=36179, avg=9012.54, stdev=4884.67 00:18:44.392 lat (usec): min=1221, max=36181, avg=9080.35, stdev=4925.00 00:18:44.392 clat percentiles (usec): 00:18:44.392 | 1.00th=[ 3392], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5604], 00:18:44.392 | 30.00th=[ 5800], 40.00th=[ 6325], 50.00th=[ 7767], 60.00th=[ 8455], 00:18:44.392 | 70.00th=[ 9241], 80.00th=[11994], 90.00th=[15926], 95.00th=[19792], 00:18:44.392 | 99.00th=[25560], 99.50th=[27657], 99.90th=[31327], 99.95th=[31327], 00:18:44.392 | 99.99th=[36439] 00:18:44.392 bw ( KiB/s): min=24576, max=24576, per=36.22%, 
avg=24576.00, stdev= 0.00, samples=2 00:18:44.392 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:18:44.392 lat (msec) : 2=0.10%, 4=1.95%, 10=63.39%, 20=27.03%, 50=7.53% 00:18:44.392 lat (msec) : 100=0.01% 00:18:44.392 cpu : usr=4.48%, sys=7.37%, ctx=314, majf=0, minf=1 00:18:44.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:44.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:44.392 issued rwts: total=6142,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:44.392 job3: (groupid=0, jobs=1): err= 0: pid=1056984: Tue Jun 11 08:11:14 2024 00:18:44.392 read: IOPS=3840, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1008msec) 00:18:44.392 slat (nsec): min=929, max=17472k, avg=122504.65, stdev=883659.36 00:18:44.392 clat (usec): min=2736, max=83368, avg=14946.53, stdev=10079.94 00:18:44.392 lat (usec): min=2776, max=83375, avg=15069.04, stdev=10170.96 00:18:44.392 clat percentiles (usec): 00:18:44.392 | 1.00th=[ 4686], 5.00th=[ 6259], 10.00th=[ 7832], 20.00th=[ 8356], 00:18:44.392 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11600], 60.00th=[13435], 00:18:44.392 | 70.00th=[16581], 80.00th=[18744], 90.00th=[26608], 95.00th=[30802], 00:18:44.392 | 99.00th=[61080], 99.50th=[79168], 99.90th=[83362], 99.95th=[83362], 00:18:44.392 | 99.99th=[83362] 00:18:44.392 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:18:44.392 slat (nsec): min=1636, max=11824k, avg=115598.90, stdev=626709.78 00:18:44.392 clat (usec): min=389, max=83366, avg=16947.62, stdev=15724.18 00:18:44.392 lat (usec): min=422, max=83379, avg=17063.22, stdev=15825.00 00:18:44.392 clat percentiles (usec): 00:18:44.392 | 1.00th=[ 906], 5.00th=[ 2999], 10.00th=[ 4359], 20.00th=[ 6325], 00:18:44.392 | 30.00th=[ 7701], 40.00th=[ 8848], 50.00th=[10290], 60.00th=[13829], 00:18:44.392 | 70.00th=[18744], 80.00th=[22938], 90.00th=[47449], 95.00th=[55313], 00:18:44.392 | 99.00th=[63701], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:18:44.392 | 99.99th=[83362] 00:18:44.392 bw ( KiB/s): min=15184, max=17584, per=24.15%, avg=16384.00, stdev=1697.06, samples=2 00:18:44.392 iops : min= 3796, max= 4396, avg=4096.00, stdev=424.26, samples=2 00:18:44.392 lat (usec) : 500=0.01%, 750=0.28%, 1000=0.40% 00:18:44.392 lat (msec) : 2=1.27%, 4=2.33%, 10=37.58%, 20=36.98%, 50=15.95% 00:18:44.392 lat (msec) : 100=5.20% 00:18:44.392 cpu : usr=2.88%, sys=4.87%, ctx=356, majf=0, minf=1 00:18:44.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:44.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:44.392 issued rwts: total=3871,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:44.392 00:18:44.392 Run status group 0 (all jobs): 00:18:44.392 READ: bw=62.6MiB/s (65.7MB/s), 8135KiB/s-23.9MiB/s (8330kB/s-25.0MB/s), io=63.1MiB (66.2MB), run=1004-1008msec 00:18:44.392 WRITE: bw=66.3MiB/s (69.5MB/s), 9212KiB/s-23.9MiB/s (9433kB/s-25.0MB/s), io=66.8MiB (70.0MB), run=1004-1008msec 00:18:44.392 00:18:44.392 Disk stats (read/write): 00:18:44.392 nvme0n1: ios=3114/3079, merge=0/0, ticks=38051/56210, in_queue=94261, util=81.26% 00:18:44.392 nvme0n2: ios=1044/1167, merge=0/0, ticks=17005/14671, in_queue=31676, 
util=80.71% 00:18:44.392 nvme0n3: ios=5083/5120, merge=0/0, ticks=49871/42649, in_queue=92520, util=100.00% 00:18:44.392 nvme0n4: ios=2591/2599, merge=0/0, ticks=45873/47149, in_queue=93022, util=95.75% 00:18:44.392 08:11:14 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:44.392 [global] 00:18:44.392 thread=1 00:18:44.392 invalidate=1 00:18:44.392 rw=randwrite 00:18:44.392 time_based=1 00:18:44.392 runtime=1 00:18:44.392 ioengine=libaio 00:18:44.392 direct=1 00:18:44.392 bs=4096 00:18:44.392 iodepth=128 00:18:44.392 norandommap=0 00:18:44.392 numjobs=1 00:18:44.392 00:18:44.392 verify_dump=1 00:18:44.392 verify_backlog=512 00:18:44.392 verify_state_save=0 00:18:44.392 do_verify=1 00:18:44.392 verify=crc32c-intel 00:18:44.392 [job0] 00:18:44.392 filename=/dev/nvme0n1 00:18:44.392 [job1] 00:18:44.392 filename=/dev/nvme0n2 00:18:44.392 [job2] 00:18:44.392 filename=/dev/nvme0n3 00:18:44.392 [job3] 00:18:44.392 filename=/dev/nvme0n4 00:18:44.392 Could not set queue depth (nvme0n1) 00:18:44.392 Could not set queue depth (nvme0n2) 00:18:44.392 Could not set queue depth (nvme0n3) 00:18:44.392 Could not set queue depth (nvme0n4) 00:18:44.659 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:44.659 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:44.659 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:44.659 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:44.659 fio-3.35 00:18:44.659 Starting 4 threads 00:18:46.043 00:18:46.043 job0: (groupid=0, jobs=1): err= 0: pid=1057502: Tue Jun 11 08:11:16 2024 00:18:46.043 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:18:46.043 slat (nsec): min=920, max=19331k, avg=116000.29, stdev=877119.40 00:18:46.043 clat (usec): min=6222, max=73210, avg=16310.46, stdev=12569.84 00:18:46.043 lat (usec): min=6232, max=82805, avg=16426.46, stdev=12667.23 00:18:46.043 clat percentiles (usec): 00:18:46.043 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 8586], 00:18:46.043 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[11994], 60.00th=[13042], 00:18:46.043 | 70.00th=[13435], 80.00th=[17695], 90.00th=[39060], 95.00th=[42730], 00:18:46.043 | 99.00th=[63177], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:18:46.043 | 99.99th=[72877] 00:18:46.043 write: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1008msec); 0 zone resets 00:18:46.043 slat (nsec): min=1573, max=16027k, avg=193965.45, stdev=960458.52 00:18:46.043 clat (usec): min=2752, max=93127, avg=23658.25, stdev=22353.71 00:18:46.043 lat (usec): min=4235, max=93128, avg=23852.21, stdev=22493.96 00:18:46.043 clat percentiles (usec): 00:18:46.043 | 1.00th=[ 5276], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 9110], 00:18:46.043 | 30.00th=[11600], 40.00th=[12125], 50.00th=[13304], 60.00th=[14353], 00:18:46.043 | 70.00th=[16450], 80.00th=[45351], 90.00th=[59507], 95.00th=[77071], 00:18:46.043 | 99.00th=[86508], 99.50th=[87557], 99.90th=[92799], 99.95th=[92799], 00:18:46.043 | 99.99th=[92799] 00:18:46.043 bw ( KiB/s): min= 4560, max=20480, per=13.17%, avg=12520.00, stdev=11257.14, samples=2 00:18:46.043 iops : min= 1140, max= 5120, avg=3130.00, stdev=2814.28, samples=2 00:18:46.043 lat (msec) : 4=0.02%, 10=29.10%, 20=48.25%, 50=12.20%, 
100=10.44% 00:18:46.043 cpu : usr=1.59%, sys=3.48%, ctx=360, majf=0, minf=1 00:18:46.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:46.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.043 issued rwts: total=3072,3258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.043 job1: (groupid=0, jobs=1): err= 0: pid=1057503: Tue Jun 11 08:11:16 2024 00:18:46.043 read: IOPS=7691, BW=30.0MiB/s (31.5MB/s)(30.3MiB/1008msec) 00:18:46.043 slat (nsec): min=853, max=9837.0k, avg=63226.67, stdev=523842.01 00:18:46.043 clat (usec): min=981, max=26846, avg=8815.77, stdev=3344.94 00:18:46.043 lat (usec): min=1005, max=26853, avg=8879.00, stdev=3370.59 00:18:46.043 clat percentiles (usec): 00:18:46.043 | 1.00th=[ 2606], 5.00th=[ 5080], 10.00th=[ 5866], 20.00th=[ 6456], 00:18:46.043 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[ 7832], 60.00th=[ 8848], 00:18:46.043 | 70.00th=[ 9896], 80.00th=[10945], 90.00th=[13435], 95.00th=[15533], 00:18:46.043 | 99.00th=[19006], 99.50th=[19268], 99.90th=[24773], 99.95th=[26870], 00:18:46.043 | 99.99th=[26870] 00:18:46.043 write: IOPS=8126, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1008msec); 0 zone resets 00:18:46.043 slat (nsec): min=1436, max=10087k, avg=51783.45, stdev=458317.96 00:18:46.043 clat (usec): min=725, max=18245, avg=7266.36, stdev=2911.12 00:18:46.043 lat (usec): min=929, max=18260, avg=7318.14, stdev=2933.64 00:18:46.043 clat percentiles (usec): 00:18:46.043 | 1.00th=[ 1795], 5.00th=[ 3261], 10.00th=[ 4080], 20.00th=[ 5014], 00:18:46.043 | 30.00th=[ 5538], 40.00th=[ 6128], 50.00th=[ 6718], 60.00th=[ 7701], 00:18:46.043 | 70.00th=[ 8356], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[12780], 00:18:46.043 | 99.00th=[16319], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:18:46.043 | 99.99th=[18220] 00:18:46.043 bw ( KiB/s): min=32216, max=32880, per=34.24%, avg=32548.00, stdev=469.52, samples=2 00:18:46.043 iops : min= 8054, max= 8220, avg=8137.00, stdev=117.38, samples=2 00:18:46.043 lat (usec) : 750=0.01%, 1000=0.03% 00:18:46.043 lat (msec) : 2=0.87%, 4=5.09%, 10=72.63%, 20=21.20%, 50=0.18% 00:18:46.043 cpu : usr=5.36%, sys=8.54%, ctx=386, majf=0, minf=1 00:18:46.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:46.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.043 issued rwts: total=7753,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.043 job2: (groupid=0, jobs=1): err= 0: pid=1057510: Tue Jun 11 08:11:16 2024 00:18:46.043 read: IOPS=5749, BW=22.5MiB/s (23.5MB/s)(22.6MiB/1008msec) 00:18:46.043 slat (nsec): min=903, max=12361k, avg=86302.89, stdev=663158.69 00:18:46.043 clat (usec): min=1826, max=37951, avg=11084.52, stdev=4961.79 00:18:46.043 lat (usec): min=1831, max=37974, avg=11170.83, stdev=5005.31 00:18:46.043 clat percentiles (usec): 00:18:46.043 | 1.00th=[ 3294], 5.00th=[ 4752], 10.00th=[ 5932], 20.00th=[ 7635], 00:18:46.043 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10814], 00:18:46.043 | 70.00th=[13173], 80.00th=[15795], 90.00th=[17957], 95.00th=[19792], 00:18:46.043 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28967], 99.95th=[29492], 00:18:46.043 | 99.99th=[38011] 00:18:46.043 write: 
IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:18:46.043 slat (nsec): min=1506, max=9511.4k, avg=69130.45, stdev=476759.89 00:18:46.044 clat (usec): min=620, max=44771, avg=10361.92, stdev=5223.37 00:18:46.044 lat (usec): min=628, max=44774, avg=10431.05, stdev=5245.92 00:18:46.044 clat percentiles (usec): 00:18:46.044 | 1.00th=[ 1893], 5.00th=[ 4228], 10.00th=[ 5145], 20.00th=[ 6587], 00:18:46.044 | 30.00th=[ 7504], 40.00th=[ 8225], 50.00th=[ 8979], 60.00th=[10159], 00:18:46.044 | 70.00th=[11994], 80.00th=[14877], 90.00th=[16450], 95.00th=[17695], 00:18:46.044 | 99.00th=[33424], 99.50th=[37487], 99.90th=[41157], 99.95th=[43254], 00:18:46.044 | 99.99th=[44827] 00:18:46.044 bw ( KiB/s): min=20521, max=28672, per=25.87%, avg=24596.50, stdev=5763.63, samples=2 00:18:46.044 iops : min= 5130, max= 7168, avg=6149.00, stdev=1441.08, samples=2 00:18:46.044 lat (usec) : 750=0.03% 00:18:46.044 lat (msec) : 2=0.69%, 4=2.12%, 10=55.77%, 20=37.98%, 50=3.42% 00:18:46.044 cpu : usr=3.18%, sys=6.85%, ctx=414, majf=0, minf=1 00:18:46.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:46.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.044 issued rwts: total=5795,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.044 job3: (groupid=0, jobs=1): err= 0: pid=1057511: Tue Jun 11 08:11:16 2024 00:18:46.044 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:18:46.044 slat (nsec): min=974, max=10627k, avg=83455.98, stdev=631756.40 00:18:46.044 clat (usec): min=4042, max=21800, avg=10652.31, stdev=2555.18 00:18:46.044 lat (usec): min=4051, max=21806, avg=10735.77, stdev=2599.13 00:18:46.044 clat percentiles (usec): 00:18:46.044 | 1.00th=[ 5932], 5.00th=[ 6915], 10.00th=[ 8291], 20.00th=[ 8979], 00:18:46.044 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:18:46.044 | 70.00th=[11076], 80.00th=[11600], 90.00th=[14746], 95.00th=[15664], 00:18:46.044 | 99.00th=[18744], 99.50th=[20055], 99.90th=[21103], 99.95th=[21103], 00:18:46.044 | 99.99th=[21890] 00:18:46.044 write: IOPS=6310, BW=24.7MiB/s (25.8MB/s)(24.8MiB/1008msec); 0 zone resets 00:18:46.044 slat (nsec): min=1630, max=9039.8k, avg=71824.42, stdev=540639.75 00:18:46.044 clat (usec): min=1329, max=21051, avg=9741.84, stdev=2543.17 00:18:46.044 lat (usec): min=2785, max=21057, avg=9813.66, stdev=2578.45 00:18:46.044 clat percentiles (usec): 00:18:46.044 | 1.00th=[ 3818], 5.00th=[ 5669], 10.00th=[ 6390], 20.00th=[ 7439], 00:18:46.044 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10290], 00:18:46.044 | 70.00th=[10683], 80.00th=[11469], 90.00th=[13042], 95.00th=[14353], 00:18:46.044 | 99.00th=[16712], 99.50th=[17171], 99.90th=[19006], 99.95th=[20841], 00:18:46.044 | 99.99th=[21103] 00:18:46.044 bw ( KiB/s): min=24576, max=25288, per=26.23%, avg=24932.00, stdev=503.46, samples=2 00:18:46.044 iops : min= 6144, max= 6322, avg=6233.00, stdev=125.87, samples=2 00:18:46.044 lat (msec) : 2=0.01%, 4=0.67%, 10=47.37%, 20=51.69%, 50=0.26% 00:18:46.044 cpu : usr=4.87%, sys=6.06%, ctx=431, majf=0, minf=1 00:18:46.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:46.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:46.044 issued rwts: 
total=6144,6361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:46.044 00:18:46.044 Run status group 0 (all jobs): 00:18:46.044 READ: bw=88.2MiB/s (92.5MB/s), 11.9MiB/s-30.0MiB/s (12.5MB/s-31.5MB/s), io=88.9MiB (93.2MB), run=1008-1008msec 00:18:46.044 WRITE: bw=92.8MiB/s (97.3MB/s), 12.6MiB/s-31.7MiB/s (13.2MB/s-33.3MB/s), io=93.6MiB (98.1MB), run=1008-1008msec 00:18:46.044 00:18:46.044 Disk stats (read/write): 00:18:46.044 nvme0n1: ios=2941/3072, merge=0/0, ticks=14034/25448, in_queue=39482, util=99.70% 00:18:46.044 nvme0n2: ios=6695/7101, merge=0/0, ticks=54016/45100, in_queue=99116, util=88.38% 00:18:46.044 nvme0n3: ios=4813/5066, merge=0/0, ticks=34327/33248, in_queue=67575, util=96.84% 00:18:46.044 nvme0n4: ios=5134/5127, merge=0/0, ticks=53548/48638, in_queue=102186, util=97.44% 00:18:46.044 08:11:16 -- target/fio.sh@55 -- # sync 00:18:46.044 08:11:16 -- target/fio.sh@59 -- # fio_pid=1057850 00:18:46.044 08:11:16 -- target/fio.sh@61 -- # sleep 3 00:18:46.044 08:11:16 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:46.044 [global] 00:18:46.044 thread=1 00:18:46.044 invalidate=1 00:18:46.044 rw=read 00:18:46.044 time_based=1 00:18:46.044 runtime=10 00:18:46.044 ioengine=libaio 00:18:46.044 direct=1 00:18:46.044 bs=4096 00:18:46.044 iodepth=1 00:18:46.044 norandommap=1 00:18:46.044 numjobs=1 00:18:46.044 00:18:46.044 [job0] 00:18:46.044 filename=/dev/nvme0n1 00:18:46.044 [job1] 00:18:46.044 filename=/dev/nvme0n2 00:18:46.044 [job2] 00:18:46.044 filename=/dev/nvme0n3 00:18:46.044 [job3] 00:18:46.044 filename=/dev/nvme0n4 00:18:46.044 Could not set queue depth (nvme0n1) 00:18:46.044 Could not set queue depth (nvme0n2) 00:18:46.044 Could not set queue depth (nvme0n3) 00:18:46.044 Could not set queue depth (nvme0n4) 00:18:46.305 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.305 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.305 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.305 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:46.305 fio-3.35 00:18:46.305 Starting 4 threads 00:18:48.851 08:11:19 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:48.851 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=8687616, buflen=4096 00:18:48.851 fio: pid=1058040, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:48.851 08:11:19 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:49.115 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=13225984, buflen=4096 00:18:49.115 fio: pid=1058039, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:49.115 08:11:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:49.115 08:11:19 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:49.115 08:11:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:49.115 08:11:19 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:49.376 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=3956736, buflen=4096 00:18:49.376 fio: pid=1058037, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:49.376 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=9658368, buflen=4096 00:18:49.376 fio: pid=1058038, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:49.376 08:11:19 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:49.376 08:11:19 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:49.376 00:18:49.376 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1058037: Tue Jun 11 08:11:19 2024 00:18:49.376 read: IOPS=331, BW=1324KiB/s (1356kB/s)(3864KiB/2918msec) 00:18:49.376 slat (usec): min=6, max=12867, avg=51.21, stdev=565.74 00:18:49.376 clat (usec): min=482, max=42662, avg=2962.78, stdev=8741.70 00:18:49.376 lat (usec): min=507, max=42688, avg=3014.01, stdev=8754.00 00:18:49.376 clat percentiles (usec): 00:18:49.376 | 1.00th=[ 635], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:18:49.376 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1045], 00:18:49.376 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1221], 95.00th=[ 1418], 00:18:49.376 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:18:49.376 | 99.99th=[42730] 00:18:49.376 bw ( KiB/s): min= 336, max= 2560, per=12.73%, avg=1438.40, stdev=1003.97, samples=5 00:18:49.376 iops : min= 84, max= 640, avg=359.60, stdev=250.99, samples=5 00:18:49.376 lat (usec) : 500=0.10%, 750=2.48%, 1000=43.43% 00:18:49.376 lat (msec) : 2=48.91%, 4=0.21%, 50=4.76% 00:18:49.376 cpu : usr=0.58%, sys=1.27%, ctx=970, majf=0, minf=1 00:18:49.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.376 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.376 issued rwts: total=967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.376 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1058038: Tue Jun 11 08:11:19 2024 00:18:49.376 read: IOPS=767, BW=3070KiB/s (3144kB/s)(9432KiB/3072msec) 00:18:49.376 slat (usec): min=5, max=11595, avg=34.46, stdev=274.96 00:18:49.376 clat (usec): min=501, max=42826, avg=1261.86, stdev=3473.52 00:18:49.376 lat (usec): min=527, max=53839, avg=1296.31, stdev=3586.91 00:18:49.376 clat percentiles (usec): 00:18:49.376 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 898], 00:18:49.376 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:18:49.376 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:18:49.376 | 99.00th=[ 1188], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:18:49.376 | 99.99th=[42730] 00:18:49.376 bw ( KiB/s): min= 3632, max= 4040, per=33.20%, avg=3750.40, stdev=164.28, samples=5 00:18:49.376 iops : min= 908, max= 1010, avg=937.60, stdev=41.07, samples=5 00:18:49.376 lat (usec) : 750=2.46%, 1000=58.84% 00:18:49.376 lat (msec) : 2=37.94%, 50=0.72% 00:18:49.376 cpu : usr=1.43%, sys=2.93%, ctx=2361, majf=0, minf=1 00:18:49.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.377 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.377 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1058039: Tue Jun 11 08:11:19 2024 00:18:49.377 read: IOPS=1185, BW=4740KiB/s (4854kB/s)(12.6MiB/2725msec) 00:18:49.377 slat (usec): min=6, max=9640, avg=28.56, stdev=210.77 00:18:49.377 clat (usec): min=173, max=1397, avg=809.20, stdev=145.06 00:18:49.377 lat (usec): min=180, max=10488, avg=837.76, stdev=256.50 00:18:49.377 clat percentiles (usec): 00:18:49.377 | 1.00th=[ 343], 5.00th=[ 537], 10.00th=[ 627], 20.00th=[ 701], 00:18:49.377 | 30.00th=[ 758], 40.00th=[ 799], 50.00th=[ 832], 60.00th=[ 873], 00:18:49.377 | 70.00th=[ 898], 80.00th=[ 922], 90.00th=[ 963], 95.00th=[ 988], 00:18:49.377 | 99.00th=[ 1090], 99.50th=[ 1172], 99.90th=[ 1254], 99.95th=[ 1336], 00:18:49.377 | 99.99th=[ 1401] 00:18:49.377 bw ( KiB/s): min= 4624, max= 4760, per=41.54%, avg=4692.80, stdev=54.44, samples=5 00:18:49.377 iops : min= 1156, max= 1190, avg=1173.20, stdev=13.61, samples=5 00:18:49.377 lat (usec) : 250=0.25%, 500=3.50%, 750=24.61%, 1000=67.93% 00:18:49.377 lat (msec) : 2=3.68% 00:18:49.377 cpu : usr=1.21%, sys=3.27%, ctx=3232, majf=0, minf=1 00:18:49.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.377 issued rwts: total=3230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.377 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1058040: Tue Jun 11 08:11:19 2024 00:18:49.377 read: IOPS=824, BW=3297KiB/s (3376kB/s)(8484KiB/2573msec) 00:18:49.377 slat (nsec): min=6630, max=57080, avg=24598.93, stdev=3026.35 00:18:49.377 clat (usec): min=540, max=42248, avg=1181.57, stdev=1528.04 00:18:49.377 lat (usec): min=564, max=42273, avg=1206.17, stdev=1528.03 00:18:49.377 clat percentiles (usec): 00:18:49.377 | 1.00th=[ 824], 5.00th=[ 930], 10.00th=[ 988], 20.00th=[ 1037], 00:18:49.377 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:18:49.377 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1270], 95.00th=[ 1303], 00:18:49.377 | 99.00th=[ 1352], 99.50th=[ 1369], 99.90th=[41157], 99.95th=[41681], 00:18:49.377 | 99.99th=[42206] 00:18:49.377 bw ( KiB/s): min= 2880, max= 3488, per=29.24%, avg=3302.40, stdev=257.46, samples=5 00:18:49.377 iops : min= 720, max= 872, avg=825.60, stdev=64.36, samples=5 00:18:49.377 lat (usec) : 750=0.33%, 1000=12.02% 00:18:49.377 lat (msec) : 2=87.46%, 50=0.14% 00:18:49.377 cpu : usr=1.13%, sys=2.18%, ctx=2122, majf=0, minf=2 00:18:49.377 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.377 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.377 issued rwts: total=2122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.377 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.377 00:18:49.377 Run status group 0 (all jobs): 00:18:49.377 
READ: bw=11.0MiB/s (11.6MB/s), 1324KiB/s-4740KiB/s (1356kB/s-4854kB/s), io=33.9MiB (35.5MB), run=2573-3072msec 00:18:49.377 00:18:49.377 Disk stats (read/write): 00:18:49.377 nvme0n1: ios=958/0, merge=0/0, ticks=2660/0, in_queue=2660, util=93.99% 00:18:49.377 nvme0n2: ios=2352/0, merge=0/0, ticks=2676/0, in_queue=2676, util=94.96% 00:18:49.377 nvme0n3: ios=3038/0, merge=0/0, ticks=2415/0, in_queue=2415, util=96.03% 00:18:49.377 nvme0n4: ios=1924/0, merge=0/0, ticks=2180/0, in_queue=2180, util=96.06% 00:18:49.637 08:11:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:49.637 08:11:20 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:49.637 08:11:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:49.637 08:11:20 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:49.898 08:11:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:49.898 08:11:20 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:50.158 08:11:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:50.158 08:11:20 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:50.158 08:11:20 -- target/fio.sh@69 -- # fio_status=0 00:18:50.158 08:11:20 -- target/fio.sh@70 -- # wait 1057850 00:18:50.158 08:11:20 -- target/fio.sh@70 -- # fio_status=4 00:18:50.158 08:11:20 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.420 08:11:20 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.420 08:11:20 -- common/autotest_common.sh@1198 -- # local i=0 00:18:50.420 08:11:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:50.420 08:11:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.420 08:11:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:50.420 08:11:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.420 08:11:20 -- common/autotest_common.sh@1210 -- # return 0 00:18:50.420 08:11:20 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:50.420 08:11:20 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:50.420 nvmf hotplug test: fio failed as expected 00:18:50.420 08:11:20 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.420 08:11:21 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:50.420 08:11:21 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:50.420 08:11:21 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:50.420 08:11:21 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:50.420 08:11:21 -- target/fio.sh@91 -- # nvmftestfini 00:18:50.420 08:11:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:50.420 08:11:21 -- nvmf/common.sh@116 -- # sync 00:18:50.420 08:11:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:50.420 08:11:21 -- nvmf/common.sh@119 -- # set +e 00:18:50.420 08:11:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:50.420 08:11:21 
-- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:50.420 rmmod nvme_tcp 00:18:50.420 rmmod nvme_fabrics 00:18:50.681 rmmod nvme_keyring 00:18:50.681 08:11:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:50.681 08:11:21 -- nvmf/common.sh@123 -- # set -e 00:18:50.681 08:11:21 -- nvmf/common.sh@124 -- # return 0 00:18:50.681 08:11:21 -- nvmf/common.sh@477 -- # '[' -n 1054307 ']' 00:18:50.681 08:11:21 -- nvmf/common.sh@478 -- # killprocess 1054307 00:18:50.681 08:11:21 -- common/autotest_common.sh@926 -- # '[' -z 1054307 ']' 00:18:50.681 08:11:21 -- common/autotest_common.sh@930 -- # kill -0 1054307 00:18:50.681 08:11:21 -- common/autotest_common.sh@931 -- # uname 00:18:50.681 08:11:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:50.681 08:11:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1054307 00:18:50.681 08:11:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:50.681 08:11:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:50.681 08:11:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1054307' 00:18:50.681 killing process with pid 1054307 00:18:50.681 08:11:21 -- common/autotest_common.sh@945 -- # kill 1054307 00:18:50.681 08:11:21 -- common/autotest_common.sh@950 -- # wait 1054307 00:18:50.681 08:11:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:50.681 08:11:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:50.681 08:11:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:50.681 08:11:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.681 08:11:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:50.681 08:11:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.681 08:11:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.681 08:11:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.224 08:11:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:53.224 00:18:53.224 real 0m28.156s 00:18:53.224 user 2m31.583s 00:18:53.224 sys 0m8.979s 00:18:53.224 08:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.224 08:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:53.224 ************************************ 00:18:53.224 END TEST nvmf_fio_target 00:18:53.224 ************************************ 00:18:53.224 08:11:23 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:53.224 08:11:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:53.224 08:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:53.224 08:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:53.224 ************************************ 00:18:53.224 START TEST nvmf_bdevio 00:18:53.224 ************************************ 00:18:53.224 08:11:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:53.224 * Looking for test storage... 
00:18:53.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.224 08:11:23 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.224 08:11:23 -- nvmf/common.sh@7 -- # uname -s 00:18:53.224 08:11:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.224 08:11:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.224 08:11:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.224 08:11:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.224 08:11:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.224 08:11:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.224 08:11:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.224 08:11:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.224 08:11:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.224 08:11:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.224 08:11:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.224 08:11:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.224 08:11:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.224 08:11:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.224 08:11:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.224 08:11:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.224 08:11:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.224 08:11:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.224 08:11:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.224 08:11:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.224 08:11:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.224 08:11:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.224 08:11:23 -- paths/export.sh@5 -- # export PATH 00:18:53.224 08:11:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.224 08:11:23 -- nvmf/common.sh@46 -- # : 0 00:18:53.224 08:11:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:53.224 08:11:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:53.224 08:11:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:53.224 08:11:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.224 08:11:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.224 08:11:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:53.224 08:11:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:53.224 08:11:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:53.224 08:11:23 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.224 08:11:23 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.224 08:11:23 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:53.224 08:11:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:53.224 08:11:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.224 08:11:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:53.224 08:11:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:53.224 08:11:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:53.224 08:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.224 08:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.224 08:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.224 08:11:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:53.224 08:11:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:53.224 08:11:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:53.224 08:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:59.811 08:11:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:59.811 08:11:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:59.811 08:11:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:59.811 08:11:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:59.811 08:11:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:59.811 08:11:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:59.811 08:11:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:59.811 08:11:30 -- nvmf/common.sh@294 -- # net_devs=() 00:18:59.811 08:11:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:59.811 08:11:30 -- nvmf/common.sh@295 
-- # e810=() 00:18:59.811 08:11:30 -- nvmf/common.sh@295 -- # local -ga e810 00:18:59.811 08:11:30 -- nvmf/common.sh@296 -- # x722=() 00:18:59.811 08:11:30 -- nvmf/common.sh@296 -- # local -ga x722 00:18:59.811 08:11:30 -- nvmf/common.sh@297 -- # mlx=() 00:18:59.811 08:11:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:59.811 08:11:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.811 08:11:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:59.811 08:11:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:59.811 08:11:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:59.811 08:11:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.811 08:11:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:59.811 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:59.811 08:11:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.811 08:11:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:59.811 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:59.811 08:11:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:59.811 08:11:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.811 08:11:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.811 08:11:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.811 08:11:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.811 08:11:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:59.811 Found 
net devices under 0000:31:00.0: cvl_0_0 00:18:59.811 08:11:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.811 08:11:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.811 08:11:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.811 08:11:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.811 08:11:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.811 08:11:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:59.811 Found net devices under 0000:31:00.1: cvl_0_1 00:18:59.811 08:11:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.811 08:11:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:59.811 08:11:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:59.811 08:11:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:59.811 08:11:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:59.811 08:11:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.811 08:11:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.811 08:11:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.811 08:11:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:59.812 08:11:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.812 08:11:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.812 08:11:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:59.812 08:11:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.812 08:11:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.812 08:11:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:59.812 08:11:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:59.812 08:11:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.812 08:11:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.073 08:11:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.073 08:11:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.073 08:11:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:00.073 08:11:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.073 08:11:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.073 08:11:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.073 08:11:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:00.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:19:00.073 00:19:00.073 --- 10.0.0.2 ping statistics --- 00:19:00.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.073 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:19:00.073 08:11:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:19:00.073 00:19:00.073 --- 10.0.0.1 ping statistics --- 00:19:00.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.073 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:19:00.073 08:11:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.073 08:11:30 -- nvmf/common.sh@410 -- # return 0 00:19:00.073 08:11:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:00.073 08:11:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.073 08:11:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:00.073 08:11:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:00.073 08:11:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.073 08:11:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:00.073 08:11:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:00.073 08:11:30 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:00.073 08:11:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:00.073 08:11:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:00.073 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.073 08:11:30 -- nvmf/common.sh@469 -- # nvmfpid=1063156 00:19:00.073 08:11:30 -- nvmf/common.sh@470 -- # waitforlisten 1063156 00:19:00.073 08:11:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:00.073 08:11:30 -- common/autotest_common.sh@819 -- # '[' -z 1063156 ']' 00:19:00.073 08:11:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.073 08:11:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:00.073 08:11:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.073 08:11:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:00.073 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.073 [2024-06-11 08:11:30.715737] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:00.073 [2024-06-11 08:11:30.715799] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.334 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.334 [2024-06-11 08:11:30.790981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:00.334 [2024-06-11 08:11:30.879882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:00.334 [2024-06-11 08:11:30.880034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.334 [2024-06-11 08:11:30.880044] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.334 [2024-06-11 08:11:30.880051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
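The target bring-up that the trace records next is driven entirely through rpc.py against the nvmf_tgt instance started above. A minimal standalone sketch of that same sequence follows; the listen address 10.0.0.2:4420, subsystem NQN, serial number, malloc bdev size and the transport flags are copied from the trace, while the checkout path and the default /var/tmp/spdk.sock RPC socket are assumptions about the environment rather than something this log states.

#!/usr/bin/env bash
# Sketch only: replays the rpc_cmd sequence visible in the bdevio.sh trace below.
# Assumes nvmf_tgt is already running (here inside the cvl_0_0_ns_spdk netns) and
# serving RPCs on the default /var/tmp/spdk.sock socket.
set -euo pipefail
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport; option flags copied verbatim from the trace
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # TCP listener on the target-side address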
00:19:00.334 [2024-06-11 08:11:30.880222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:00.334 [2024-06-11 08:11:30.880392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:00.334 [2024-06-11 08:11:30.880550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.334 [2024-06-11 08:11:30.880550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:00.906 08:11:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.906 08:11:31 -- common/autotest_common.sh@852 -- # return 0 00:19:00.906 08:11:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.906 08:11:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:00.906 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:00.906 08:11:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.906 08:11:31 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.906 08:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.906 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.166 [2024-06-11 08:11:31.554872] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.166 08:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.166 08:11:31 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:01.166 08:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.166 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.166 Malloc0 00:19:01.166 08:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.166 08:11:31 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.166 08:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.166 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.166 08:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.166 08:11:31 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:01.166 08:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.166 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.166 08:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.166 08:11:31 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.166 08:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.166 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.166 [2024-06-11 08:11:31.619953] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.166 08:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.166 08:11:31 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:01.166 08:11:31 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:01.166 08:11:31 -- nvmf/common.sh@520 -- # config=() 00:19:01.166 08:11:31 -- nvmf/common.sh@520 -- # local subsystem config 00:19:01.166 08:11:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:01.166 08:11:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:01.166 { 00:19:01.166 "params": { 00:19:01.166 "name": "Nvme$subsystem", 00:19:01.166 "trtype": "$TEST_TRANSPORT", 00:19:01.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:01.166 "adrfam": "ipv4", 00:19:01.166 "trsvcid": 
"$NVMF_PORT", 00:19:01.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:01.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:01.166 "hdgst": ${hdgst:-false}, 00:19:01.166 "ddgst": ${ddgst:-false} 00:19:01.166 }, 00:19:01.166 "method": "bdev_nvme_attach_controller" 00:19:01.166 } 00:19:01.166 EOF 00:19:01.166 )") 00:19:01.166 08:11:31 -- nvmf/common.sh@542 -- # cat 00:19:01.166 08:11:31 -- nvmf/common.sh@544 -- # jq . 00:19:01.166 08:11:31 -- nvmf/common.sh@545 -- # IFS=, 00:19:01.166 08:11:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:01.166 "params": { 00:19:01.166 "name": "Nvme1", 00:19:01.166 "trtype": "tcp", 00:19:01.166 "traddr": "10.0.0.2", 00:19:01.166 "adrfam": "ipv4", 00:19:01.166 "trsvcid": "4420", 00:19:01.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.166 "hdgst": false, 00:19:01.166 "ddgst": false 00:19:01.166 }, 00:19:01.166 "method": "bdev_nvme_attach_controller" 00:19:01.166 }' 00:19:01.166 [2024-06-11 08:11:31.672512] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:01.167 [2024-06-11 08:11:31.672584] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063414 ] 00:19:01.167 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.167 [2024-06-11 08:11:31.740489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:01.427 [2024-06-11 08:11:31.813715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.427 [2024-06-11 08:11:31.813833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.427 [2024-06-11 08:11:31.813836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.427 [2024-06-11 08:11:31.948495] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:01.427 [2024-06-11 08:11:31.948527] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:01.427 I/O targets: 00:19:01.427 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:01.427 00:19:01.427 00:19:01.427 CUnit - A unit testing framework for C - Version 2.1-3 00:19:01.427 http://cunit.sourceforge.net/ 00:19:01.427 00:19:01.427 00:19:01.427 Suite: bdevio tests on: Nvme1n1 00:19:01.427 Test: blockdev write read block ...passed 00:19:01.427 Test: blockdev write zeroes read block ...passed 00:19:01.427 Test: blockdev write zeroes read no split ...passed 00:19:01.694 Test: blockdev write zeroes read split ...passed 00:19:01.694 Test: blockdev write zeroes read split partial ...passed 00:19:01.694 Test: blockdev reset ...[2024-06-11 08:11:32.110688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:01.694 [2024-06-11 08:11:32.110736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb29080 (9): Bad file descriptor 00:19:01.694 [2024-06-11 08:11:32.169883] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:01.694 passed 00:19:01.694 Test: blockdev write read 8 blocks ...passed 00:19:01.694 Test: blockdev write read size > 128k ...passed 00:19:01.694 Test: blockdev write read invalid size ...passed 00:19:01.694 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:01.694 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:01.694 Test: blockdev write read max offset ...passed 00:19:01.954 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:01.954 Test: blockdev writev readv 8 blocks ...passed 00:19:01.954 Test: blockdev writev readv 30 x 1block ...passed 00:19:01.954 Test: blockdev writev readv block ...passed 00:19:01.954 Test: blockdev writev readv size > 128k ...passed 00:19:01.954 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:01.954 Test: blockdev comparev and writev ...[2024-06-11 08:11:32.437330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.437355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.437369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.437377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.437901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.437913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.437928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.437940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.438459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.438472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.438486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.438495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.438945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.438954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.438967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:01.954 [2024-06-11 08:11:32.438977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:01.954 passed 00:19:01.954 Test: blockdev nvme passthru rw ...passed 00:19:01.954 Test: blockdev nvme passthru vendor specific ...[2024-06-11 08:11:32.523271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:01.954 [2024-06-11 08:11:32.523282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.523657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:01.954 [2024-06-11 08:11:32.523666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.523981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:01.954 [2024-06-11 08:11:32.523989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:01.954 [2024-06-11 08:11:32.524345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:01.954 [2024-06-11 08:11:32.524353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:01.954 passed 00:19:01.954 Test: blockdev nvme admin passthru ...passed 00:19:01.954 Test: blockdev copy ...passed 00:19:01.954 00:19:01.954 Run Summary: Type Total Ran Passed Failed Inactive 00:19:01.954 suites 1 1 n/a 0 0 00:19:01.954 tests 23 23 23 0 0 00:19:01.954 asserts 152 152 152 0 n/a 00:19:01.954 00:19:01.954 Elapsed time = 1.279 seconds 00:19:02.215 08:11:32 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:02.215 08:11:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.215 08:11:32 -- common/autotest_common.sh@10 -- # set +x 00:19:02.215 08:11:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.215 08:11:32 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:02.215 08:11:32 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:02.215 08:11:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:02.215 08:11:32 -- nvmf/common.sh@116 -- # sync 00:19:02.215 08:11:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:02.215 08:11:32 -- nvmf/common.sh@119 -- # set +e 00:19:02.215 08:11:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:02.215 08:11:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:02.215 rmmod nvme_tcp 00:19:02.215 rmmod nvme_fabrics 00:19:02.215 rmmod nvme_keyring 00:19:02.215 08:11:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:02.215 08:11:32 -- nvmf/common.sh@123 -- # set -e 00:19:02.215 08:11:32 -- nvmf/common.sh@124 -- # return 0 00:19:02.215 08:11:32 -- nvmf/common.sh@477 -- # '[' -n 1063156 ']' 00:19:02.215 08:11:32 -- nvmf/common.sh@478 -- # killprocess 1063156 00:19:02.215 08:11:32 -- common/autotest_common.sh@926 -- # '[' -z 1063156 ']' 00:19:02.215 08:11:32 -- common/autotest_common.sh@930 -- # kill -0 1063156 00:19:02.215 08:11:32 -- common/autotest_common.sh@931 -- # uname 00:19:02.215 08:11:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:02.215 08:11:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1063156 00:19:02.215 08:11:32 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:02.215 08:11:32 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:02.215 08:11:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1063156' 00:19:02.215 killing process with pid 1063156 00:19:02.215 08:11:32 -- common/autotest_common.sh@945 -- # kill 1063156 00:19:02.215 08:11:32 -- common/autotest_common.sh@950 -- # wait 1063156 00:19:02.476 08:11:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:02.477 08:11:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:02.477 08:11:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:02.477 08:11:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.477 08:11:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:02.477 08:11:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.477 08:11:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.477 08:11:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.019 08:11:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:05.019 00:19:05.019 real 0m11.643s 00:19:05.019 user 0m12.463s 00:19:05.019 sys 0m5.717s 00:19:05.019 08:11:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.019 08:11:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.019 ************************************ 00:19:05.019 END TEST nvmf_bdevio 00:19:05.019 ************************************ 00:19:05.019 08:11:35 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:05.019 08:11:35 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:05.019 08:11:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:05.019 08:11:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:05.019 08:11:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.019 ************************************ 00:19:05.019 START TEST nvmf_bdevio_no_huge 00:19:05.019 ************************************ 00:19:05.019 08:11:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:05.019 * Looking for test storage... 
00:19:05.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.019 08:11:35 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.019 08:11:35 -- nvmf/common.sh@7 -- # uname -s 00:19:05.019 08:11:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.019 08:11:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.019 08:11:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.019 08:11:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.019 08:11:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.019 08:11:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.019 08:11:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.019 08:11:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.019 08:11:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.019 08:11:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.019 08:11:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.019 08:11:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.019 08:11:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.019 08:11:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.019 08:11:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.019 08:11:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.019 08:11:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.019 08:11:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.019 08:11:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.019 08:11:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.019 08:11:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.019 08:11:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.019 08:11:35 -- paths/export.sh@5 -- # export PATH 00:19:05.019 08:11:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.019 08:11:35 -- nvmf/common.sh@46 -- # : 0 00:19:05.019 08:11:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:05.019 08:11:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:05.019 08:11:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:05.019 08:11:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.019 08:11:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.019 08:11:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:05.019 08:11:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:05.019 08:11:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:05.019 08:11:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.019 08:11:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.019 08:11:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:05.019 08:11:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:05.019 08:11:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.019 08:11:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:05.019 08:11:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:05.019 08:11:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:05.019 08:11:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.019 08:11:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.019 08:11:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.019 08:11:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:05.019 08:11:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:05.019 08:11:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:05.019 08:11:35 -- common/autotest_common.sh@10 -- # set +x 00:19:11.605 08:11:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:11.605 08:11:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:11.605 08:11:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:11.605 08:11:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:11.605 08:11:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:11.605 08:11:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:11.605 08:11:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:11.605 08:11:41 -- nvmf/common.sh@294 -- # net_devs=() 00:19:11.605 08:11:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:11.605 08:11:41 -- nvmf/common.sh@295 
-- # e810=() 00:19:11.605 08:11:41 -- nvmf/common.sh@295 -- # local -ga e810 00:19:11.605 08:11:41 -- nvmf/common.sh@296 -- # x722=() 00:19:11.605 08:11:41 -- nvmf/common.sh@296 -- # local -ga x722 00:19:11.605 08:11:41 -- nvmf/common.sh@297 -- # mlx=() 00:19:11.605 08:11:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:11.605 08:11:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.605 08:11:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.605 08:11:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.605 08:11:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.605 08:11:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.605 08:11:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.606 08:11:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.606 08:11:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.606 08:11:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.606 08:11:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.606 08:11:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.606 08:11:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:11.606 08:11:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:11.606 08:11:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:11.606 08:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.606 08:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:11.606 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:11.606 08:11:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.606 08:11:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:11.606 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:11.606 08:11:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:11.606 08:11:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.606 08:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.606 08:11:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.606 08:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.606 08:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:11.606 Found 
net devices under 0000:31:00.0: cvl_0_0 00:19:11.606 08:11:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.606 08:11:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.606 08:11:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.606 08:11:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.606 08:11:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.606 08:11:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:11.606 Found net devices under 0000:31:00.1: cvl_0_1 00:19:11.606 08:11:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.606 08:11:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:11.606 08:11:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:11.606 08:11:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:11.606 08:11:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:11.606 08:11:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.606 08:11:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.606 08:11:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.606 08:11:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:11.606 08:11:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.606 08:11:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.606 08:11:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:11.606 08:11:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.606 08:11:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.606 08:11:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:11.606 08:11:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:11.606 08:11:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.606 08:11:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.606 08:11:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.606 08:11:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.606 08:11:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:11.606 08:11:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.606 08:11:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.606 08:11:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.606 08:11:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:11.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:19:11.606 00:19:11.606 --- 10.0.0.2 ping statistics --- 00:19:11.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.606 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:19:11.606 08:11:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:11.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:19:11.606 00:19:11.606 --- 10.0.0.1 ping statistics --- 00:19:11.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.606 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:19:11.606 08:11:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.606 08:11:42 -- nvmf/common.sh@410 -- # return 0 00:19:11.606 08:11:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:11.606 08:11:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.606 08:11:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:11.606 08:11:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:11.606 08:11:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.606 08:11:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:11.606 08:11:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:11.606 08:11:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:11.606 08:11:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:11.606 08:11:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:11.606 08:11:42 -- common/autotest_common.sh@10 -- # set +x 00:19:11.606 08:11:42 -- nvmf/common.sh@469 -- # nvmfpid=1067787 00:19:11.606 08:11:42 -- nvmf/common.sh@470 -- # waitforlisten 1067787 00:19:11.606 08:11:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:11.606 08:11:42 -- common/autotest_common.sh@819 -- # '[' -z 1067787 ']' 00:19:11.606 08:11:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.606 08:11:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:11.606 08:11:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.606 08:11:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:11.606 08:11:42 -- common/autotest_common.sh@10 -- # set +x 00:19:11.606 [2024-06-11 08:11:42.206952] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:11.606 [2024-06-11 08:11:42.207020] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:11.865 [2024-06-11 08:11:42.302805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.865 [2024-06-11 08:11:42.405466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:11.865 [2024-06-11 08:11:42.405610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.865 [2024-06-11 08:11:42.405619] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.865 [2024-06-11 08:11:42.405626] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
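The hugepage-free variant that follows repeats the same bring-up; the only difference is that both the target and the bdevio initiator are launched with --no-huge and a 1024 MB memory size (-s 1024). A condensed sketch of the two launch commands as they appear in this run is below; the netns name, core mask and paths belong to this test environment, and the JSON handed to bdevio on fd 62 is generated by the test's gen_nvmf_target_json helper and printed later in the trace.

# Sketch only: the hugepage-free launches recorded in this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target side: nvmf_tgt inside the target netns, no hugepages, 1024 MB of memory, core mask 0x78 (cores 3-6).
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

# Initiator side, after the same rpc.py configuration as the previous run: bdevio reads its
# bdev_nvme_attach_controller config as JSON from fd 62 and also runs without hugepages.
"$SPDK/test/bdev/bdevio/bdevio" --json /dev/fd/62 --no-huge -s 1024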
00:19:11.865 [2024-06-11 08:11:42.405792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:11.865 [2024-06-11 08:11:42.405957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:11.865 [2024-06-11 08:11:42.406117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.865 [2024-06-11 08:11:42.406116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:12.434 08:11:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:12.434 08:11:42 -- common/autotest_common.sh@852 -- # return 0 00:19:12.434 08:11:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:12.434 08:11:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:12.434 08:11:42 -- common/autotest_common.sh@10 -- # set +x 00:19:12.434 08:11:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.434 08:11:43 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:12.434 08:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:12.434 08:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:12.434 [2024-06-11 08:11:43.047444] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.434 08:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:12.434 08:11:43 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:12.434 08:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:12.434 08:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:12.434 Malloc0 00:19:12.434 08:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:12.434 08:11:43 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:12.434 08:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:12.434 08:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:12.694 08:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:12.694 08:11:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.694 08:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:12.694 08:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:12.694 08:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:12.694 08:11:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.694 08:11:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:12.694 08:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:12.694 [2024-06-11 08:11:43.101032] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.694 08:11:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:12.694 08:11:43 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:12.694 08:11:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:12.694 08:11:43 -- nvmf/common.sh@520 -- # config=() 00:19:12.694 08:11:43 -- nvmf/common.sh@520 -- # local subsystem config 00:19:12.694 08:11:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:12.694 08:11:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:12.694 { 00:19:12.694 "params": { 00:19:12.694 "name": "Nvme$subsystem", 00:19:12.694 "trtype": "$TEST_TRANSPORT", 00:19:12.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.694 "adrfam": "ipv4", 00:19:12.694 
"trsvcid": "$NVMF_PORT", 00:19:12.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.694 "hdgst": ${hdgst:-false}, 00:19:12.694 "ddgst": ${ddgst:-false} 00:19:12.694 }, 00:19:12.694 "method": "bdev_nvme_attach_controller" 00:19:12.694 } 00:19:12.694 EOF 00:19:12.694 )") 00:19:12.694 08:11:43 -- nvmf/common.sh@542 -- # cat 00:19:12.694 08:11:43 -- nvmf/common.sh@544 -- # jq . 00:19:12.694 08:11:43 -- nvmf/common.sh@545 -- # IFS=, 00:19:12.694 08:11:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:12.694 "params": { 00:19:12.694 "name": "Nvme1", 00:19:12.694 "trtype": "tcp", 00:19:12.694 "traddr": "10.0.0.2", 00:19:12.694 "adrfam": "ipv4", 00:19:12.694 "trsvcid": "4420", 00:19:12.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.694 "hdgst": false, 00:19:12.694 "ddgst": false 00:19:12.694 }, 00:19:12.694 "method": "bdev_nvme_attach_controller" 00:19:12.694 }' 00:19:12.694 [2024-06-11 08:11:43.160336] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:12.694 [2024-06-11 08:11:43.160411] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1067950 ] 00:19:12.694 [2024-06-11 08:11:43.231019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:12.694 [2024-06-11 08:11:43.324325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.694 [2024-06-11 08:11:43.324472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.694 [2024-06-11 08:11:43.324502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.264 [2024-06-11 08:11:43.623408] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:13.264 [2024-06-11 08:11:43.623433] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:13.264 I/O targets: 00:19:13.264 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:13.264 00:19:13.264 00:19:13.264 CUnit - A unit testing framework for C - Version 2.1-3 00:19:13.264 http://cunit.sourceforge.net/ 00:19:13.264 00:19:13.264 00:19:13.264 Suite: bdevio tests on: Nvme1n1 00:19:13.264 Test: blockdev write read block ...passed 00:19:13.264 Test: blockdev write zeroes read block ...passed 00:19:13.264 Test: blockdev write zeroes read no split ...passed 00:19:13.264 Test: blockdev write zeroes read split ...passed 00:19:13.264 Test: blockdev write zeroes read split partial ...passed 00:19:13.264 Test: blockdev reset ...[2024-06-11 08:11:43.801797] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:13.264 [2024-06-11 08:11:43.801849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb76480 (9): Bad file descriptor 00:19:13.264 [2024-06-11 08:11:43.860567] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:13.264 passed 00:19:13.264 Test: blockdev write read 8 blocks ...passed 00:19:13.264 Test: blockdev write read size > 128k ...passed 00:19:13.264 Test: blockdev write read invalid size ...passed 00:19:13.523 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:13.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:13.523 Test: blockdev write read max offset ...passed 00:19:13.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:13.523 Test: blockdev writev readv 8 blocks ...passed 00:19:13.523 Test: blockdev writev readv 30 x 1block ...passed 00:19:13.523 Test: blockdev writev readv block ...passed 00:19:13.523 Test: blockdev writev readv size > 128k ...passed 00:19:13.523 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:13.523 Test: blockdev comparev and writev ...[2024-06-11 08:11:44.128330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.128353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:13.523 [2024-06-11 08:11:44.128364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.128370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:13.523 [2024-06-11 08:11:44.128862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.128871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:13.523 [2024-06-11 08:11:44.128880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.128885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:13.523 [2024-06-11 08:11:44.129321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.129328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:13.523 [2024-06-11 08:11:44.129338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.129343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:13.523 [2024-06-11 08:11:44.129820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.129828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:13.523 [2024-06-11 08:11:44.129838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:13.523 [2024-06-11 08:11:44.129843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:13.781 passed 00:19:13.781 Test: blockdev nvme passthru rw ...passed 00:19:13.781 Test: blockdev nvme passthru vendor specific ...[2024-06-11 08:11:44.215315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.781 [2024-06-11 08:11:44.215325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:13.781 [2024-06-11 08:11:44.215649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.781 [2024-06-11 08:11:44.215656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:13.781 [2024-06-11 08:11:44.216006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.781 [2024-06-11 08:11:44.216013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:13.781 [2024-06-11 08:11:44.216348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:13.781 [2024-06-11 08:11:44.216355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:13.781 passed 00:19:13.781 Test: blockdev nvme admin passthru ...passed 00:19:13.781 Test: blockdev copy ...passed 00:19:13.781 00:19:13.781 Run Summary: Type Total Ran Passed Failed Inactive 00:19:13.781 suites 1 1 n/a 0 0 00:19:13.781 tests 23 23 23 0 0 00:19:13.781 asserts 152 152 152 0 n/a 00:19:13.781 00:19:13.781 Elapsed time = 1.313 seconds 00:19:14.040 08:11:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.040 08:11:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.040 08:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.040 08:11:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.040 08:11:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:14.040 08:11:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:14.040 08:11:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:14.040 08:11:44 -- nvmf/common.sh@116 -- # sync 00:19:14.040 08:11:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:14.040 08:11:44 -- nvmf/common.sh@119 -- # set +e 00:19:14.040 08:11:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:14.040 08:11:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:14.040 rmmod nvme_tcp 00:19:14.040 rmmod nvme_fabrics 00:19:14.040 rmmod nvme_keyring 00:19:14.040 08:11:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:14.040 08:11:44 -- nvmf/common.sh@123 -- # set -e 00:19:14.040 08:11:44 -- nvmf/common.sh@124 -- # return 0 00:19:14.040 08:11:44 -- nvmf/common.sh@477 -- # '[' -n 1067787 ']' 00:19:14.040 08:11:44 -- nvmf/common.sh@478 -- # killprocess 1067787 00:19:14.040 08:11:44 -- common/autotest_common.sh@926 -- # '[' -z 1067787 ']' 00:19:14.040 08:11:44 -- common/autotest_common.sh@930 -- # kill -0 1067787 00:19:14.040 08:11:44 -- common/autotest_common.sh@931 -- # uname 00:19:14.040 08:11:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:14.040 08:11:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1067787 00:19:14.040 08:11:44 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:14.041 08:11:44 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:14.041 08:11:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1067787' 00:19:14.041 killing process with pid 1067787 00:19:14.041 08:11:44 -- common/autotest_common.sh@945 -- # kill 1067787 00:19:14.041 08:11:44 -- common/autotest_common.sh@950 -- # wait 1067787 00:19:14.609 08:11:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:14.609 08:11:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:14.609 08:11:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:14.609 08:11:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.609 08:11:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:14.609 08:11:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.609 08:11:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.609 08:11:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.520 08:11:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:16.520 00:19:16.520 real 0m11.948s 00:19:16.520 user 0m14.402s 00:19:16.520 sys 0m6.177s 00:19:16.520 08:11:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.520 08:11:47 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 ************************************ 00:19:16.520 END TEST nvmf_bdevio_no_huge 00:19:16.520 ************************************ 00:19:16.520 08:11:47 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.520 08:11:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:16.520 08:11:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:16.520 08:11:47 -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 ************************************ 00:19:16.520 START TEST nvmf_tls 00:19:16.520 ************************************ 00:19:16.520 08:11:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.520 * Looking for test storage... 
00:19:16.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.780 08:11:47 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.780 08:11:47 -- nvmf/common.sh@7 -- # uname -s 00:19:16.780 08:11:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.780 08:11:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.780 08:11:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.780 08:11:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.780 08:11:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.780 08:11:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.780 08:11:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.780 08:11:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.780 08:11:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.780 08:11:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.780 08:11:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.780 08:11:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.780 08:11:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.780 08:11:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.780 08:11:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.780 08:11:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.780 08:11:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.780 08:11:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.780 08:11:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.780 08:11:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.780 08:11:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.780 08:11:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.780 08:11:47 -- paths/export.sh@5 -- # export PATH 00:19:16.780 08:11:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.780 08:11:47 -- nvmf/common.sh@46 -- # : 0 00:19:16.780 08:11:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:16.780 08:11:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:16.780 08:11:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:16.780 08:11:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.781 08:11:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.781 08:11:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:16.781 08:11:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:16.781 08:11:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:16.781 08:11:47 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.781 08:11:47 -- target/tls.sh@71 -- # nvmftestinit 00:19:16.781 08:11:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:16.781 08:11:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.781 08:11:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:16.781 08:11:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:16.781 08:11:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:16.781 08:11:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.781 08:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.781 08:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.781 08:11:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:16.781 08:11:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:16.781 08:11:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:16.781 08:11:47 -- common/autotest_common.sh@10 -- # set +x 00:19:23.359 08:11:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.360 08:11:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:23.360 08:11:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:23.360 08:11:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:23.360 08:11:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:23.360 08:11:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:23.360 08:11:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:23.360 08:11:53 -- nvmf/common.sh@294 -- # net_devs=() 00:19:23.360 08:11:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:23.360 08:11:53 -- nvmf/common.sh@295 -- # e810=() 00:19:23.360 
08:11:53 -- nvmf/common.sh@295 -- # local -ga e810 00:19:23.360 08:11:53 -- nvmf/common.sh@296 -- # x722=() 00:19:23.360 08:11:53 -- nvmf/common.sh@296 -- # local -ga x722 00:19:23.360 08:11:53 -- nvmf/common.sh@297 -- # mlx=() 00:19:23.360 08:11:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:23.360 08:11:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.360 08:11:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:23.360 08:11:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:23.360 08:11:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:23.360 08:11:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.360 08:11:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:23.360 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:23.360 08:11:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.360 08:11:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:23.360 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:23.360 08:11:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:23.360 08:11:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.360 08:11:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.360 08:11:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.360 08:11:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.360 08:11:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:23.360 Found net devices under 
0000:31:00.0: cvl_0_0 00:19:23.360 08:11:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.360 08:11:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.360 08:11:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.360 08:11:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.360 08:11:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.360 08:11:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:23.360 Found net devices under 0000:31:00.1: cvl_0_1 00:19:23.360 08:11:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.360 08:11:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:23.360 08:11:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:23.360 08:11:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:23.360 08:11:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:23.360 08:11:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.360 08:11:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.360 08:11:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.360 08:11:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:23.360 08:11:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.360 08:11:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.360 08:11:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:23.360 08:11:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.360 08:11:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.360 08:11:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:23.360 08:11:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:23.360 08:11:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.360 08:11:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.620 08:11:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.620 08:11:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.620 08:11:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:23.620 08:11:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.620 08:11:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.620 08:11:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.881 08:11:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:23.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:19:23.881 00:19:23.881 --- 10.0.0.2 ping statistics --- 00:19:23.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.881 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:19:23.881 08:11:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:23.881 00:19:23.881 --- 10.0.0.1 ping statistics --- 00:19:23.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.881 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:23.881 08:11:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.881 08:11:54 -- nvmf/common.sh@410 -- # return 0 00:19:23.881 08:11:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:23.881 08:11:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.881 08:11:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:23.881 08:11:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:23.881 08:11:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.881 08:11:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:23.881 08:11:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:23.881 08:11:54 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:23.881 08:11:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:23.881 08:11:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:23.881 08:11:54 -- common/autotest_common.sh@10 -- # set +x 00:19:23.881 08:11:54 -- nvmf/common.sh@469 -- # nvmfpid=1072595 00:19:23.881 08:11:54 -- nvmf/common.sh@470 -- # waitforlisten 1072595 00:19:23.881 08:11:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:23.881 08:11:54 -- common/autotest_common.sh@819 -- # '[' -z 1072595 ']' 00:19:23.881 08:11:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.881 08:11:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:23.881 08:11:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.881 08:11:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:23.881 08:11:54 -- common/autotest_common.sh@10 -- # set +x 00:19:23.881 [2024-06-11 08:11:54.369842] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:23.881 [2024-06-11 08:11:54.369890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.881 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.881 [2024-06-11 08:11:54.454769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.142 [2024-06-11 08:11:54.533792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:24.142 [2024-06-11 08:11:54.533940] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.142 [2024-06-11 08:11:54.533950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.142 [2024-06-11 08:11:54.533957] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
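Before either target comes up, nvmf_tcp_init in nvmf/common.sh splits the two E810 ports between the root namespace and a private one, so initiator and target can talk over real hardware on a single host. Roughly, with the device and namespace names taken from the trace:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic through
  ping -c 1 10.0.0.2                                            # root ns -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and back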
00:19:24.142 [2024-06-11 08:11:54.533994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.714 08:11:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:24.714 08:11:55 -- common/autotest_common.sh@852 -- # return 0 00:19:24.714 08:11:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.714 08:11:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:24.714 08:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:24.714 08:11:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.714 08:11:55 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:24.714 08:11:55 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:24.714 true 00:19:24.714 08:11:55 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:24.714 08:11:55 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.974 08:11:55 -- target/tls.sh@82 -- # version=0 00:19:24.974 08:11:55 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:24.974 08:11:55 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:25.235 08:11:55 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.235 08:11:55 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:25.235 08:11:55 -- target/tls.sh@90 -- # version=13 00:19:25.235 08:11:55 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:25.235 08:11:55 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:25.495 08:11:55 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.495 08:11:55 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:25.755 08:11:56 -- target/tls.sh@98 -- # version=7 00:19:25.755 08:11:56 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:25.755 08:11:56 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.755 08:11:56 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:25.755 08:11:56 -- target/tls.sh@105 -- # ktls=false 00:19:25.755 08:11:56 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:25.755 08:11:56 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:26.016 08:11:56 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.016 08:11:56 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:26.016 08:11:56 -- target/tls.sh@113 -- # ktls=true 00:19:26.016 08:11:56 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:26.016 08:11:56 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:26.277 08:11:56 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.277 08:11:56 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:26.538 08:11:56 -- target/tls.sh@121 -- # ktls=false 00:19:26.538 08:11:56 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:26.538 08:11:56 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:26.538 08:11:56 -- target/tls.sh@49 -- # local key hash crc 00:19:26.538 08:11:56 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:26.538 08:11:56 -- target/tls.sh@51 -- # hash=01 00:19:26.538 08:11:56 -- target/tls.sh@52 -- # gzip -1 -c 00:19:26.538 08:11:56 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:26.538 08:11:56 -- target/tls.sh@52 -- # tail -c8 00:19:26.538 08:11:56 -- target/tls.sh@52 -- # head -c 4 00:19:26.538 08:11:56 -- target/tls.sh@52 -- # crc='p$H�' 00:19:26.538 08:11:56 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:26.538 08:11:56 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:26.538 08:11:56 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.538 08:11:56 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.538 08:11:56 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:26.538 08:11:56 -- target/tls.sh@49 -- # local key hash crc 00:19:26.538 08:11:56 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:26.538 08:11:56 -- target/tls.sh@51 -- # hash=01 00:19:26.538 08:11:57 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:26.538 08:11:57 -- target/tls.sh@52 -- # gzip -1 -c 00:19:26.538 08:11:57 -- target/tls.sh@52 -- # tail -c8 00:19:26.538 08:11:57 -- target/tls.sh@52 -- # head -c 4 00:19:26.538 08:11:57 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:26.538 08:11:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:26.538 08:11:57 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:26.538 08:11:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.538 08:11:57 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.538 08:11:57 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:26.538 08:11:57 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:26.538 08:11:57 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.538 08:11:57 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.538 08:11:57 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:26.538 08:11:57 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:26.538 08:11:57 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:26.800 08:11:57 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:26.800 08:11:57 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:26.800 08:11:57 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:26.800 08:11:57 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.061 [2024-06-11 08:11:57.582329] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
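The format_interchange_psk trace above is where tls.sh builds the two NVMe TLS interchange keys it later writes to key1.txt and key2.txt: the CRC32 of the configured hex key is pulled out of a gzip trailer, appended to the key, and the result is base64-wrapped. A simplified sketch of that pipeline (the real script feeds the binary CRC through /dev/fd/62 rather than a shell variable, which this sketch glosses over):

  format_interchange_psk() {
      local key=$1 hash=01
      # gzip's 8-byte trailer is CRC32 followed by the input length; keep the 4 CRC bytes
      local crc
      crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
      # interchange form: NVMeTLSkey-1:<hash>:<base64(key + crc)>:
      echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
  }
  # format_interchange_psk 00112233445566778899aabbccddeeff
  #   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: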
00:19:27.061 08:11:57 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.322 08:11:57 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.322 [2024-06-11 08:11:57.911132] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.323 [2024-06-11 08:11:57.911312] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.323 08:11:57 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.584 malloc0 00:19:27.584 08:11:58 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.845 08:11:58 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:27.845 08:11:58 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:27.845 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.866 Initializing NVMe Controllers 00:19:37.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:37.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:37.866 Initialization complete. Launching workers. 
00:19:37.866 ======================================================== 00:19:37.866 Latency(us) 00:19:37.866 Device Information : IOPS MiB/s Average min max 00:19:37.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19686.25 76.90 3250.98 1212.25 5068.82 00:19:37.866 ======================================================== 00:19:37.866 Total : 19686.25 76.90 3250.98 1212.25 5068.82 00:19:37.866 00:19:37.866 08:12:08 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:37.866 08:12:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:37.866 08:12:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:37.866 08:12:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:37.866 08:12:08 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:37.866 08:12:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.866 08:12:08 -- target/tls.sh@28 -- # bdevperf_pid=1075739 00:19:37.866 08:12:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.866 08:12:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.866 08:12:08 -- target/tls.sh@31 -- # waitforlisten 1075739 /var/tmp/bdevperf.sock 00:19:37.866 08:12:08 -- common/autotest_common.sh@819 -- # '[' -z 1075739 ']' 00:19:37.866 08:12:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.866 08:12:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:37.866 08:12:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.866 08:12:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:37.866 08:12:08 -- common/autotest_common.sh@10 -- # set +x 00:19:38.144 [2024-06-11 08:12:08.515127] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:19:38.144 [2024-06-11 08:12:08.515184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075739 ] 00:19:38.144 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.144 [2024-06-11 08:12:08.566013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.144 [2024-06-11 08:12:08.617085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.723 08:12:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:38.723 08:12:09 -- common/autotest_common.sh@852 -- # return 0 00:19:38.723 08:12:09 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:38.983 [2024-06-11 08:12:09.422357] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.983 TLSTESTn1 00:19:38.983 08:12:09 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:38.983 Running I/O for 10 seconds... 00:19:51.202 00:19:51.202 Latency(us) 00:19:51.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.202 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:51.202 Verification LBA range: start 0x0 length 0x2000 00:19:51.202 TLSTESTn1 : 10.01 6402.83 25.01 0.00 0.00 19973.64 3467.95 51554.99 00:19:51.202 =================================================================================================================== 00:19:51.202 Total : 6402.83 25.01 0.00 0.00 19973.64 3467.95 51554.99 00:19:51.202 0 00:19:51.202 08:12:19 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:51.202 08:12:19 -- target/tls.sh@45 -- # killprocess 1075739 00:19:51.202 08:12:19 -- common/autotest_common.sh@926 -- # '[' -z 1075739 ']' 00:19:51.202 08:12:19 -- common/autotest_common.sh@930 -- # kill -0 1075739 00:19:51.202 08:12:19 -- common/autotest_common.sh@931 -- # uname 00:19:51.202 08:12:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:51.202 08:12:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1075739 00:19:51.202 08:12:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:51.202 08:12:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:51.202 08:12:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1075739' 00:19:51.202 killing process with pid 1075739 00:19:51.202 08:12:19 -- common/autotest_common.sh@945 -- # kill 1075739 00:19:51.202 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.202 00:19:51.202 Latency(us) 00:19:51.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.202 =================================================================================================================== 00:19:51.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.202 08:12:19 -- common/autotest_common.sh@950 -- # wait 1075739 00:19:51.202 08:12:19 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:51.202 08:12:19 -- common/autotest_common.sh@640 -- # local es=0 00:19:51.202 08:12:19 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:51.202 08:12:19 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:51.202 08:12:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.202 08:12:19 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:51.202 08:12:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.202 08:12:19 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:51.202 08:12:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.202 08:12:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.202 08:12:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.202 08:12:19 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:19:51.202 08:12:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.202 08:12:19 -- target/tls.sh@28 -- # bdevperf_pid=1078106 00:19:51.202 08:12:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.202 08:12:19 -- target/tls.sh@31 -- # waitforlisten 1078106 /var/tmp/bdevperf.sock 00:19:51.202 08:12:19 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.202 08:12:19 -- common/autotest_common.sh@819 -- # '[' -z 1078106 ']' 00:19:51.202 08:12:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.202 08:12:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:51.202 08:12:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.202 08:12:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:51.202 08:12:19 -- common/autotest_common.sh@10 -- # set +x 00:19:51.202 [2024-06-11 08:12:19.873723] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:19:51.202 [2024-06-11 08:12:19.873776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078106 ] 00:19:51.202 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.202 [2024-06-11 08:12:19.924287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.202 [2024-06-11 08:12:19.974196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.202 08:12:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:51.202 08:12:20 -- common/autotest_common.sh@852 -- # return 0 00:19:51.202 08:12:20 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:51.202 [2024-06-11 08:12:20.768288] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.202 [2024-06-11 08:12:20.779620] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.202 [2024-06-11 08:12:20.780154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2448a00 (107): Transport endpoint is not connected 00:19:51.202 [2024-06-11 08:12:20.781149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2448a00 (9): Bad file descriptor 00:19:51.202 [2024-06-11 08:12:20.782151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.202 [2024-06-11 08:12:20.782157] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.202 [2024-06-11 08:12:20.782163] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:51.202 request: 00:19:51.202 { 00:19:51.202 "name": "TLSTEST", 00:19:51.202 "trtype": "tcp", 00:19:51.202 "traddr": "10.0.0.2", 00:19:51.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.202 "adrfam": "ipv4", 00:19:51.202 "trsvcid": "4420", 00:19:51.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.202 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:19:51.202 "method": "bdev_nvme_attach_controller", 00:19:51.202 "req_id": 1 00:19:51.202 } 00:19:51.202 Got JSON-RPC error response 00:19:51.202 response: 00:19:51.202 { 00:19:51.202 "code": -32602, 00:19:51.202 "message": "Invalid parameters" 00:19:51.202 } 00:19:51.202 08:12:20 -- target/tls.sh@36 -- # killprocess 1078106 00:19:51.202 08:12:20 -- common/autotest_common.sh@926 -- # '[' -z 1078106 ']' 00:19:51.202 08:12:20 -- common/autotest_common.sh@930 -- # kill -0 1078106 00:19:51.202 08:12:20 -- common/autotest_common.sh@931 -- # uname 00:19:51.202 08:12:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:51.202 08:12:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1078106 00:19:51.202 08:12:20 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:51.202 08:12:20 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:51.203 08:12:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1078106' 00:19:51.203 killing process with pid 1078106 00:19:51.203 08:12:20 -- common/autotest_common.sh@945 -- # kill 1078106 00:19:51.203 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.203 00:19:51.203 Latency(us) 00:19:51.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.203 =================================================================================================================== 00:19:51.203 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.203 08:12:20 -- common/autotest_common.sh@950 -- # wait 1078106 00:19:51.203 08:12:20 -- target/tls.sh@37 -- # return 1 00:19:51.203 08:12:20 -- common/autotest_common.sh@643 -- # es=1 00:19:51.203 08:12:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:51.203 08:12:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:51.203 08:12:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:51.203 08:12:20 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.203 08:12:20 -- common/autotest_common.sh@640 -- # local es=0 00:19:51.203 08:12:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.203 08:12:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:51.203 08:12:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.203 08:12:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:51.203 08:12:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.203 08:12:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.203 08:12:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.203 08:12:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.203 08:12:20 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:19:51.203 08:12:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:51.203 08:12:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.203 08:12:20 -- target/tls.sh@28 -- # bdevperf_pid=1078410 00:19:51.203 08:12:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.203 08:12:20 -- target/tls.sh@31 -- # waitforlisten 1078410 /var/tmp/bdevperf.sock 00:19:51.203 08:12:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.203 08:12:20 -- common/autotest_common.sh@819 -- # '[' -z 1078410 ']' 00:19:51.203 08:12:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.203 08:12:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:51.203 08:12:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.203 08:12:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:51.203 08:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:51.203 [2024-06-11 08:12:21.019515] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:51.203 [2024-06-11 08:12:21.019569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078410 ] 00:19:51.203 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.203 [2024-06-11 08:12:21.069979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.203 [2024-06-11 08:12:21.119269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.203 08:12:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:51.203 08:12:21 -- common/autotest_common.sh@852 -- # return 0 00:19:51.203 08:12:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.463 [2024-06-11 08:12:21.924184] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.463 [2024-06-11 08:12:21.932331] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:51.463 [2024-06-11 08:12:21.932351] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:51.463 [2024-06-11 08:12:21.932370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.463 [2024-06-11 08:12:21.933171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafea00 (107): Transport endpoint is not connected 00:19:51.464 [2024-06-11 08:12:21.934165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0xafea00 (9): Bad file descriptor 00:19:51.464 [2024-06-11 08:12:21.935167] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:51.464 [2024-06-11 08:12:21.935174] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.464 [2024-06-11 08:12:21.935180] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.464 request: 00:19:51.464 { 00:19:51.464 "name": "TLSTEST", 00:19:51.464 "trtype": "tcp", 00:19:51.464 "traddr": "10.0.0.2", 00:19:51.464 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:51.464 "adrfam": "ipv4", 00:19:51.464 "trsvcid": "4420", 00:19:51.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.464 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:51.464 "method": "bdev_nvme_attach_controller", 00:19:51.464 "req_id": 1 00:19:51.464 } 00:19:51.464 Got JSON-RPC error response 00:19:51.464 response: 00:19:51.464 { 00:19:51.464 "code": -32602, 00:19:51.464 "message": "Invalid parameters" 00:19:51.464 } 00:19:51.464 08:12:21 -- target/tls.sh@36 -- # killprocess 1078410 00:19:51.464 08:12:21 -- common/autotest_common.sh@926 -- # '[' -z 1078410 ']' 00:19:51.464 08:12:21 -- common/autotest_common.sh@930 -- # kill -0 1078410 00:19:51.464 08:12:21 -- common/autotest_common.sh@931 -- # uname 00:19:51.464 08:12:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:51.464 08:12:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1078410 00:19:51.464 08:12:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:51.464 08:12:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:51.464 08:12:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1078410' 00:19:51.464 killing process with pid 1078410 00:19:51.464 08:12:22 -- common/autotest_common.sh@945 -- # kill 1078410 00:19:51.464 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.464 00:19:51.464 Latency(us) 00:19:51.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.464 =================================================================================================================== 00:19:51.464 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.464 08:12:22 -- common/autotest_common.sh@950 -- # wait 1078410 00:19:51.724 08:12:22 -- target/tls.sh@37 -- # return 1 00:19:51.724 08:12:22 -- common/autotest_common.sh@643 -- # es=1 00:19:51.724 08:12:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:51.724 08:12:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:51.724 08:12:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:51.724 08:12:22 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.724 08:12:22 -- common/autotest_common.sh@640 -- # local es=0 00:19:51.724 08:12:22 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.724 08:12:22 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:51.724 08:12:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.724 08:12:22 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:51.724 08:12:22 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.724 08:12:22 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:51.724 08:12:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.724 08:12:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:51.724 08:12:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.724 08:12:22 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:51.724 08:12:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.724 08:12:22 -- target/tls.sh@28 -- # bdevperf_pid=1078750 00:19:51.724 08:12:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.724 08:12:22 -- target/tls.sh@31 -- # waitforlisten 1078750 /var/tmp/bdevperf.sock 00:19:51.724 08:12:22 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.724 08:12:22 -- common/autotest_common.sh@819 -- # '[' -z 1078750 ']' 00:19:51.724 08:12:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.724 08:12:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:51.724 08:12:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.724 08:12:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:51.724 08:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:51.724 [2024-06-11 08:12:22.172559] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
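The bdevperf instance that each of these cases spins up is started idle and only gets its work over RPC. A hedged reading of the command line used throughout this trace (flag meanings inferred from SPDK's bdevperf usage, not restated from it; the long jenkins prefix is abbreviated to $SPDK):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf \
      -m 0x4 \                      # core mask 0x4, hence "Reactor started on core 2"
      -z \                          # create no bdevs at startup; wait for configuration over RPC
      -r /var/tmp/bdevperf.sock \   # RPC socket that rpc.py and bdevperf.py talk to below
      -q 128 -o 4096 \              # queue depth 128, 4096-byte I/Os
      -w verify -t 10               # "verify" workload, 10-second run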
00:19:51.724 [2024-06-11 08:12:22.172612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078750 ] 00:19:51.724 EAL: No free 2048 kB hugepages reported on node 1 00:19:51.724 [2024-06-11 08:12:22.223081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.724 [2024-06-11 08:12:22.272231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.295 08:12:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:52.295 08:12:22 -- common/autotest_common.sh@852 -- # return 0 00:19:52.295 08:12:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.555 [2024-06-11 08:12:23.077178] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.555 [2024-06-11 08:12:23.085125] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:52.555 [2024-06-11 08:12:23.085143] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:52.555 [2024-06-11 08:12:23.085163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.555 [2024-06-11 08:12:23.085204] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131aa00 (107): Transport endpoint is not connected 00:19:52.555 [2024-06-11 08:12:23.086189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131aa00 (9): Bad file descriptor 00:19:52.555 [2024-06-11 08:12:23.087191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:52.555 [2024-06-11 08:12:23.087198] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:52.555 [2024-06-11 08:12:23.087205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
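Both failures above are PSK identity mismatches rather than transport problems: the target derives the lookup identity "NVMe0R01 <hostnqn> <subnqn>" from the connecting host and the requested subsystem, and it only holds a key for the pairing registered earlier in the script with nvmf_subsystem_add_host. A condensed sketch of the two sides, with the jenkins prefix shortened to $SPDK (which pairing was actually registered is not visible in this part of the trace, so treat the add_host line as illustrative):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # target side: a PSK is bound to one (subsystem, host) pair, e.g. cnode1 + host1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk $SPDK/test/nvmf/target/key1.txt

  # initiator side: connecting as host2 (or to cnode2) with that same key file
  # makes the target look up an identity that was never registered, so the
  # handshake is torn down and the attach fails as shown above
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
      --psk $SPDK/test/nvmf/target/key1.txt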
00:19:52.555 request: 00:19:52.555 { 00:19:52.555 "name": "TLSTEST", 00:19:52.555 "trtype": "tcp", 00:19:52.555 "traddr": "10.0.0.2", 00:19:52.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.555 "adrfam": "ipv4", 00:19:52.555 "trsvcid": "4420", 00:19:52.555 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:52.555 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:52.555 "method": "bdev_nvme_attach_controller", 00:19:52.555 "req_id": 1 00:19:52.555 } 00:19:52.555 Got JSON-RPC error response 00:19:52.555 response: 00:19:52.555 { 00:19:52.555 "code": -32602, 00:19:52.555 "message": "Invalid parameters" 00:19:52.555 } 00:19:52.555 08:12:23 -- target/tls.sh@36 -- # killprocess 1078750 00:19:52.555 08:12:23 -- common/autotest_common.sh@926 -- # '[' -z 1078750 ']' 00:19:52.555 08:12:23 -- common/autotest_common.sh@930 -- # kill -0 1078750 00:19:52.555 08:12:23 -- common/autotest_common.sh@931 -- # uname 00:19:52.555 08:12:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.555 08:12:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1078750 00:19:52.555 08:12:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:52.555 08:12:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:52.555 08:12:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1078750' 00:19:52.555 killing process with pid 1078750 00:19:52.555 08:12:23 -- common/autotest_common.sh@945 -- # kill 1078750 00:19:52.555 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.555 00:19:52.555 Latency(us) 00:19:52.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.555 =================================================================================================================== 00:19:52.555 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.555 08:12:23 -- common/autotest_common.sh@950 -- # wait 1078750 00:19:52.815 08:12:23 -- target/tls.sh@37 -- # return 1 00:19:52.815 08:12:23 -- common/autotest_common.sh@643 -- # es=1 00:19:52.815 08:12:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:52.815 08:12:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:52.815 08:12:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:52.815 08:12:23 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:52.815 08:12:23 -- common/autotest_common.sh@640 -- # local es=0 00:19:52.815 08:12:23 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:52.815 08:12:23 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:52.815 08:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:52.815 08:12:23 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:52.815 08:12:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:52.815 08:12:23 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:52.815 08:12:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:52.816 08:12:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:52.816 08:12:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:52.816 08:12:23 -- target/tls.sh@23 -- # psk= 00:19:52.816 08:12:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.816 08:12:23 -- target/tls.sh@28 
-- # bdevperf_pid=1078823 00:19:52.816 08:12:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.816 08:12:23 -- target/tls.sh@31 -- # waitforlisten 1078823 /var/tmp/bdevperf.sock 00:19:52.816 08:12:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.816 08:12:23 -- common/autotest_common.sh@819 -- # '[' -z 1078823 ']' 00:19:52.816 08:12:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.816 08:12:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:52.816 08:12:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.816 08:12:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:52.816 08:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:52.816 [2024-06-11 08:12:23.320530] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:52.816 [2024-06-11 08:12:23.320588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078823 ] 00:19:52.816 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.816 [2024-06-11 08:12:23.371162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.816 [2024-06-11 08:12:23.421506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.757 08:12:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:53.757 08:12:24 -- common/autotest_common.sh@852 -- # return 0 00:19:53.757 08:12:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:53.757 [2024-06-11 08:12:24.227934] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:53.757 [2024-06-11 08:12:24.229142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe4a340 (9): Bad file descriptor 00:19:53.757 [2024-06-11 08:12:24.230141] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.757 [2024-06-11 08:12:24.230147] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:53.757 [2024-06-11 08:12:24.230154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
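The NOT wrapper that the xtrace keeps expanding (local es=0, valid_exec_arg, the closing (( !es == 0 )) check) is the harness's way of asserting that a command fails; the test passes only when the wrapped attach does not come up. A minimal illustrative reimplementation, assuming nothing beyond what the trace shows (the real helper in autotest_common.sh also validates its argument and special-cases exit codes above 128):

  NOT() {
      # run the wrapped command and capture its exit status
      local es=0
      "$@" || es=$?
      # succeed only if the wrapped command failed
      (( es != 0 ))
  }

  # usage mirroring this section of the trace: an attach with no PSK must fail
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''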
00:19:53.757 request: 00:19:53.757 { 00:19:53.757 "name": "TLSTEST", 00:19:53.757 "trtype": "tcp", 00:19:53.757 "traddr": "10.0.0.2", 00:19:53.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.757 "adrfam": "ipv4", 00:19:53.757 "trsvcid": "4420", 00:19:53.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.757 "method": "bdev_nvme_attach_controller", 00:19:53.757 "req_id": 1 00:19:53.757 } 00:19:53.757 Got JSON-RPC error response 00:19:53.757 response: 00:19:53.757 { 00:19:53.757 "code": -32602, 00:19:53.757 "message": "Invalid parameters" 00:19:53.757 } 00:19:53.757 08:12:24 -- target/tls.sh@36 -- # killprocess 1078823 00:19:53.757 08:12:24 -- common/autotest_common.sh@926 -- # '[' -z 1078823 ']' 00:19:53.757 08:12:24 -- common/autotest_common.sh@930 -- # kill -0 1078823 00:19:53.757 08:12:24 -- common/autotest_common.sh@931 -- # uname 00:19:53.757 08:12:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:53.757 08:12:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1078823 00:19:53.757 08:12:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:53.757 08:12:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:53.757 08:12:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1078823' 00:19:53.757 killing process with pid 1078823 00:19:53.757 08:12:24 -- common/autotest_common.sh@945 -- # kill 1078823 00:19:53.757 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.757 00:19:53.757 Latency(us) 00:19:53.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.757 =================================================================================================================== 00:19:53.757 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:53.757 08:12:24 -- common/autotest_common.sh@950 -- # wait 1078823 00:19:54.017 08:12:24 -- target/tls.sh@37 -- # return 1 00:19:54.017 08:12:24 -- common/autotest_common.sh@643 -- # es=1 00:19:54.017 08:12:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:54.017 08:12:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:54.017 08:12:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:54.017 08:12:24 -- target/tls.sh@167 -- # killprocess 1072595 00:19:54.017 08:12:24 -- common/autotest_common.sh@926 -- # '[' -z 1072595 ']' 00:19:54.017 08:12:24 -- common/autotest_common.sh@930 -- # kill -0 1072595 00:19:54.017 08:12:24 -- common/autotest_common.sh@931 -- # uname 00:19:54.017 08:12:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:54.017 08:12:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1072595 00:19:54.017 08:12:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:54.017 08:12:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:54.017 08:12:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1072595' 00:19:54.017 killing process with pid 1072595 00:19:54.017 08:12:24 -- common/autotest_common.sh@945 -- # kill 1072595 00:19:54.017 08:12:24 -- common/autotest_common.sh@950 -- # wait 1072595 00:19:54.017 08:12:24 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:19:54.017 08:12:24 -- target/tls.sh@49 -- # local key hash crc 00:19:54.017 08:12:24 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:54.017 08:12:24 -- target/tls.sh@51 -- # hash=02 00:19:54.017 08:12:24 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:19:54.017 08:12:24 -- target/tls.sh@52 -- # tail -c8 00:19:54.017 08:12:24 -- target/tls.sh@52 -- # gzip -1 -c 00:19:54.017 08:12:24 -- target/tls.sh@52 -- # head -c 4 00:19:54.017 08:12:24 -- target/tls.sh@52 -- # crc='�e�'\''' 00:19:54.017 08:12:24 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:54.017 08:12:24 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:19:54.017 08:12:24 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:54.018 08:12:24 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:54.018 08:12:24 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:54.018 08:12:24 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:54.018 08:12:24 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:54.018 08:12:24 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:19:54.018 08:12:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:54.018 08:12:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:54.018 08:12:24 -- common/autotest_common.sh@10 -- # set +x 00:19:54.018 08:12:24 -- nvmf/common.sh@469 -- # nvmfpid=1079135 00:19:54.018 08:12:24 -- nvmf/common.sh@470 -- # waitforlisten 1079135 00:19:54.018 08:12:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:54.018 08:12:24 -- common/autotest_common.sh@819 -- # '[' -z 1079135 ']' 00:19:54.018 08:12:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.018 08:12:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:54.018 08:12:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.018 08:12:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:54.018 08:12:24 -- common/autotest_common.sh@10 -- # set +x 00:19:54.278 [2024-06-11 08:12:24.675938] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:54.278 [2024-06-11 08:12:24.676020] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.278 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.278 [2024-06-11 08:12:24.760070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.278 [2024-06-11 08:12:24.809942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:54.278 [2024-06-11 08:12:24.810039] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.278 [2024-06-11 08:12:24.810045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.278 [2024-06-11 08:12:24.810050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
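The format_interchange_psk block above turns the raw configured secret into the NVMe TLS PSK interchange string that ends up in key_long.txt: the CRC32 that gzip -1 writes into its trailer is appended to the secret, and the concatenation is base64-encoded inside the NVMeTLSkey-1:02:...: framing. A condensed sketch of the same derivation, folded into one pipeline (the CRC is raw binary, so keeping it in a shell variable only works because these particular four bytes contain no NUL, exactly as in the original script):

  key=00112233445566778899aabbccddeeff0011223344556677
  hash=02   # hash field of the interchange format, as used in the trace

  # gzip's 8-byte trailer is CRC32 (little-endian) followed by the input size;
  # the first 4 of those bytes are the CRC32 of the key string
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)

  # interchange form: base64(key || crc) wrapped in the NVMeTLSkey-1 framing
  key_long="NVMeTLSkey-1:${hash}:$(echo -n "$key$crc" | base64):"
  echo "$key_long"   # NVMeTLSkey-1:02:MDAx...wWXNJw==:  (matches the value above)

  # the test writes it out and, for the positive cases, keeps it owner-only
  echo -n "$key_long" > key_long.txt && chmod 0600 key_long.txt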
00:19:54.278 [2024-06-11 08:12:24.810064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.847 08:12:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:54.847 08:12:25 -- common/autotest_common.sh@852 -- # return 0 00:19:54.847 08:12:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:54.847 08:12:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:54.847 08:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:54.847 08:12:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.847 08:12:25 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:54.847 08:12:25 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:54.847 08:12:25 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.106 [2024-06-11 08:12:25.596110] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.106 08:12:25 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.106 08:12:25 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.366 [2024-06-11 08:12:25.880799] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.366 [2024-06-11 08:12:25.880974] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.366 08:12:25 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.626 malloc0 00:19:55.626 08:12:26 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:55.626 08:12:26 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:55.886 08:12:26 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:55.886 08:12:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:55.886 08:12:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:55.886 08:12:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:55.886 08:12:26 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:55.886 08:12:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.886 08:12:26 -- target/tls.sh@28 -- # bdevperf_pid=1079496 00:19:55.886 08:12:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:55.886 08:12:26 -- target/tls.sh@31 -- # waitforlisten 1079496 /var/tmp/bdevperf.sock 00:19:55.886 08:12:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:55.886 08:12:26 -- common/autotest_common.sh@819 -- # '[' -z 1079496 
']' 00:19:55.886 08:12:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.886 08:12:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:55.886 08:12:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.886 08:12:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:55.886 08:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:55.886 [2024-06-11 08:12:26.387897] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:55.886 [2024-06-11 08:12:26.387945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079496 ] 00:19:55.886 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.887 [2024-06-11 08:12:26.437448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.887 [2024-06-11 08:12:26.488026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.827 08:12:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:56.827 08:12:27 -- common/autotest_common.sh@852 -- # return 0 00:19:56.827 08:12:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:56.827 [2024-06-11 08:12:27.280818] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.827 TLSTESTn1 00:19:56.827 08:12:27 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:56.827 Running I/O for 10 seconds... 
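This is the first attach in the section that succeeds: key_long.txt is registered on the target for cnode1/host1 and handed to the initiator, the controller comes up, and the TLSTESTn1 bdev appears. Because bdevperf was started with -z, the 10-second verify run whose results follow is kicked off explicitly over its RPC socket. The two calls, condensed from the trace with the jenkins prefix shortened to $SPDK:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # attach over TLS with the interchange-format key; this creates TLSTESTn1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/target/key_long.txt

  # tell the waiting bdevperf instance to run its configured verify workload
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests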
00:20:09.054 00:20:09.054 Latency(us) 00:20:09.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.054 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:09.054 Verification LBA range: start 0x0 length 0x2000 00:20:09.054 TLSTESTn1 : 10.03 7843.93 30.64 0.00 0.00 16293.49 3659.09 38229.33 00:20:09.054 =================================================================================================================== 00:20:09.054 Total : 7843.93 30.64 0.00 0.00 16293.49 3659.09 38229.33 00:20:09.054 0 00:20:09.054 08:12:37 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.054 08:12:37 -- target/tls.sh@45 -- # killprocess 1079496 00:20:09.054 08:12:37 -- common/autotest_common.sh@926 -- # '[' -z 1079496 ']' 00:20:09.054 08:12:37 -- common/autotest_common.sh@930 -- # kill -0 1079496 00:20:09.054 08:12:37 -- common/autotest_common.sh@931 -- # uname 00:20:09.054 08:12:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.054 08:12:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1079496 00:20:09.054 08:12:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:09.054 08:12:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:09.054 08:12:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1079496' 00:20:09.054 killing process with pid 1079496 00:20:09.054 08:12:37 -- common/autotest_common.sh@945 -- # kill 1079496 00:20:09.054 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.054 00:20:09.054 Latency(us) 00:20:09.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.054 =================================================================================================================== 00:20:09.054 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.054 08:12:37 -- common/autotest_common.sh@950 -- # wait 1079496 00:20:09.054 08:12:37 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.054 08:12:37 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.054 08:12:37 -- common/autotest_common.sh@640 -- # local es=0 00:20:09.054 08:12:37 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.054 08:12:37 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:09.054 08:12:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.054 08:12:37 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:09.054 08:12:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.054 08:12:37 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.054 08:12:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.054 08:12:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.054 08:12:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.054 08:12:37 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:09.054 08:12:37 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.054 08:12:37 -- target/tls.sh@28 -- # bdevperf_pid=1081863 00:20:09.054 08:12:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.054 08:12:37 -- target/tls.sh@31 -- # waitforlisten 1081863 /var/tmp/bdevperf.sock 00:20:09.054 08:12:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.054 08:12:37 -- common/autotest_common.sh@819 -- # '[' -z 1081863 ']' 00:20:09.054 08:12:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.054 08:12:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.054 08:12:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.054 08:12:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.054 08:12:37 -- common/autotest_common.sh@10 -- # set +x 00:20:09.054 [2024-06-11 08:12:37.752420] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:09.054 [2024-06-11 08:12:37.752478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081863 ] 00:20:09.054 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.054 [2024-06-11 08:12:37.801880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.054 [2024-06-11 08:12:37.851459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.054 08:12:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:09.054 08:12:38 -- common/autotest_common.sh@852 -- # return 0 00:20:09.054 08:12:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.054 [2024-06-11 08:12:38.628216] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.054 [2024-06-11 08:12:38.628241] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:09.054 request: 00:20:09.054 { 00:20:09.054 "name": "TLSTEST", 00:20:09.054 "trtype": "tcp", 00:20:09.054 "traddr": "10.0.0.2", 00:20:09.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.054 "adrfam": "ipv4", 00:20:09.054 "trsvcid": "4420", 00:20:09.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.054 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:09.054 "method": "bdev_nvme_attach_controller", 00:20:09.054 "req_id": 1 00:20:09.054 } 00:20:09.054 Got JSON-RPC error response 00:20:09.054 response: 00:20:09.054 { 00:20:09.054 "code": -22, 00:20:09.054 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:09.054 } 00:20:09.054 08:12:38 -- target/tls.sh@36 -- # killprocess 1081863 00:20:09.054 08:12:38 -- common/autotest_common.sh@926 -- # '[' -z 1081863 ']' 00:20:09.054 08:12:38 -- 
common/autotest_common.sh@930 -- # kill -0 1081863 00:20:09.054 08:12:38 -- common/autotest_common.sh@931 -- # uname 00:20:09.054 08:12:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.054 08:12:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1081863 00:20:09.054 08:12:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:09.054 08:12:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:09.054 08:12:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1081863' 00:20:09.054 killing process with pid 1081863 00:20:09.054 08:12:38 -- common/autotest_common.sh@945 -- # kill 1081863 00:20:09.054 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.054 00:20:09.054 Latency(us) 00:20:09.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.054 =================================================================================================================== 00:20:09.054 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:09.054 08:12:38 -- common/autotest_common.sh@950 -- # wait 1081863 00:20:09.054 08:12:38 -- target/tls.sh@37 -- # return 1 00:20:09.054 08:12:38 -- common/autotest_common.sh@643 -- # es=1 00:20:09.054 08:12:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:09.054 08:12:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:09.054 08:12:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:09.054 08:12:38 -- target/tls.sh@183 -- # killprocess 1079135 00:20:09.054 08:12:38 -- common/autotest_common.sh@926 -- # '[' -z 1079135 ']' 00:20:09.054 08:12:38 -- common/autotest_common.sh@930 -- # kill -0 1079135 00:20:09.055 08:12:38 -- common/autotest_common.sh@931 -- # uname 00:20:09.055 08:12:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:09.055 08:12:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1079135 00:20:09.055 08:12:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:09.055 08:12:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:09.055 08:12:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1079135' 00:20:09.055 killing process with pid 1079135 00:20:09.055 08:12:38 -- common/autotest_common.sh@945 -- # kill 1079135 00:20:09.055 08:12:38 -- common/autotest_common.sh@950 -- # wait 1079135 00:20:09.055 08:12:38 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:09.055 08:12:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:09.055 08:12:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:09.055 08:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:09.055 08:12:38 -- nvmf/common.sh@469 -- # nvmfpid=1082032 00:20:09.055 08:12:38 -- nvmf/common.sh@470 -- # waitforlisten 1082032 00:20:09.055 08:12:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:09.055 08:12:38 -- common/autotest_common.sh@819 -- # '[' -z 1082032 ']' 00:20:09.055 08:12:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.055 08:12:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:09.055 08:12:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
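The chmod 0666 case just torn down is the initiator-side permission check: with the key file world-readable, tcp_load_psk refuses to use it and bdev_nvme_attach_controller returns -22 ("Could not retrieve PSK from file") without the connection being attempted. The trace only exercises 0600 (accepted) versus 0666 (rejected), so the exact mode bits tolerated are not established here. A short reproduction of the negative case, paths shortened to $SPDK:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  chmod 0666 $SPDK/test/nvmf/target/key_long.txt   # world-readable PSK file
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/target/key_long.txt    # fails: "Incorrect permissions for PSK file"

  chmod 0600 $SPDK/test/nvmf/target/key_long.txt   # restore owner-only for the later runs

The same check exists on the target side: the next part of the trace shows nvmf_subsystem_add_host failing with "Internal error" while the key file is still 0666.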
00:20:09.055 08:12:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:09.055 08:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:09.055 [2024-06-11 08:12:39.030399] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:09.055 [2024-06-11 08:12:39.030459] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.055 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.055 [2024-06-11 08:12:39.111681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.055 [2024-06-11 08:12:39.163955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:09.055 [2024-06-11 08:12:39.164051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.055 [2024-06-11 08:12:39.164057] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.055 [2024-06-11 08:12:39.164061] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.055 [2024-06-11 08:12:39.164075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.316 08:12:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:09.316 08:12:39 -- common/autotest_common.sh@852 -- # return 0 00:20:09.316 08:12:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:09.316 08:12:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:09.316 08:12:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.316 08:12:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.316 08:12:39 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.316 08:12:39 -- common/autotest_common.sh@640 -- # local es=0 00:20:09.316 08:12:39 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.316 08:12:39 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:09.316 08:12:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.316 08:12:39 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:09.316 08:12:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:09.316 08:12:39 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.316 08:12:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:09.316 08:12:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.577 [2024-06-11 08:12:39.966227] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.577 08:12:39 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.577 08:12:40 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.838 [2024-06-11 08:12:40.255180] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.838 [2024-06-11 08:12:40.255361] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.838 08:12:40 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.838 malloc0 00:20:09.838 08:12:40 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.098 08:12:40 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:10.098 [2024-06-11 08:12:40.669918] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:10.098 [2024-06-11 08:12:40.669935] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:10.098 [2024-06-11 08:12:40.669948] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:10.098 request: 00:20:10.098 { 00:20:10.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.098 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.098 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:10.098 "method": "nvmf_subsystem_add_host", 00:20:10.098 "req_id": 1 00:20:10.098 } 00:20:10.098 Got JSON-RPC error response 00:20:10.098 response: 00:20:10.098 { 00:20:10.098 "code": -32603, 00:20:10.098 "message": "Internal error" 00:20:10.098 } 00:20:10.098 08:12:40 -- common/autotest_common.sh@643 -- # es=1 00:20:10.098 08:12:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:10.098 08:12:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:10.098 08:12:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:10.098 08:12:40 -- target/tls.sh@189 -- # killprocess 1082032 00:20:10.099 08:12:40 -- common/autotest_common.sh@926 -- # '[' -z 1082032 ']' 00:20:10.099 08:12:40 -- common/autotest_common.sh@930 -- # kill -0 1082032 00:20:10.099 08:12:40 -- common/autotest_common.sh@931 -- # uname 00:20:10.099 08:12:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:10.099 08:12:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1082032 00:20:10.099 08:12:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:10.099 08:12:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:10.099 08:12:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1082032' 00:20:10.099 killing process with pid 1082032 00:20:10.099 08:12:40 -- common/autotest_common.sh@945 -- # kill 1082032 00:20:10.099 08:12:40 -- common/autotest_common.sh@950 -- # wait 1082032 00:20:10.360 08:12:40 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:10.360 08:12:40 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:10.360 08:12:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:10.360 08:12:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:10.360 08:12:40 -- common/autotest_common.sh@10 -- # set +x 00:20:10.360 08:12:40 -- nvmf/common.sh@469 -- # nvmfpid=1082480 00:20:10.360 08:12:40 -- nvmf/common.sh@470 -- # waitforlisten 1082480 00:20:10.360 08:12:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:10.360 08:12:40 -- common/autotest_common.sh@819 -- # '[' -z 1082480 ']' 00:20:10.360 08:12:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.360 08:12:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:10.360 08:12:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.360 08:12:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:10.360 08:12:40 -- common/autotest_common.sh@10 -- # set +x 00:20:10.360 [2024-06-11 08:12:40.926300] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:10.360 [2024-06-11 08:12:40.926363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.360 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.620 [2024-06-11 08:12:41.007624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.620 [2024-06-11 08:12:41.060674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:10.620 [2024-06-11 08:12:41.060765] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.620 [2024-06-11 08:12:41.060771] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.620 [2024-06-11 08:12:41.060779] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
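Each nvmf target in this run is launched inside the cvl_0_0_ns_spdk network namespace, which is where the 10.0.0.2 listen address lives. A hedged reading of the launch line, with flag meanings inferred from the startup notices echoed above rather than from the usage text (ip netns exec requires root; the CI job runs with sufficient privileges):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt \
      -i 0 \        # instance/shm id 0; the trace points at /dev/shm/nvmf_trace.0
      -e 0xFFFF \   # tracepoint group mask, echoed as "Tracepoint Group Mask 0xFFFF specified"
      -m 0x2        # core mask 0x2, hence "Reactor started on core 1"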
00:20:10.620 [2024-06-11 08:12:41.060796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.191 08:12:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:11.191 08:12:41 -- common/autotest_common.sh@852 -- # return 0 00:20:11.191 08:12:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:11.191 08:12:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:11.191 08:12:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.191 08:12:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.191 08:12:41 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.191 08:12:41 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.191 08:12:41 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.451 [2024-06-11 08:12:41.838818] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.451 08:12:41 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.451 08:12:41 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.710 [2024-06-11 08:12:42.123520] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.710 [2024-06-11 08:12:42.123677] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.710 08:12:42 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:11.710 malloc0 00:20:11.710 08:12:42 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.970 08:12:42 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:11.970 08:12:42 -- target/tls.sh@197 -- # bdevperf_pid=1082848 00:20:11.970 08:12:42 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.970 08:12:42 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:11.970 08:12:42 -- target/tls.sh@200 -- # waitforlisten 1082848 /var/tmp/bdevperf.sock 00:20:11.970 08:12:42 -- common/autotest_common.sh@819 -- # '[' -z 1082848 ']' 00:20:11.970 08:12:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.970 08:12:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:11.970 08:12:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
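With the key permissions restored, this final target is configured the same way as the earlier working one. Condensed from the rpc.py calls in the trace (prefix shortened to $SPDK), the whole target-side TLS setup is:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener ("secure_channel": true in the saved config below)
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk $SPDK/test/nvmf/target/key_long.txt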
00:20:11.970 08:12:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:11.970 08:12:42 -- common/autotest_common.sh@10 -- # set +x 00:20:12.231 [2024-06-11 08:12:42.650397] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:12.231 [2024-06-11 08:12:42.650452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082848 ] 00:20:12.231 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.231 [2024-06-11 08:12:42.701694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.231 [2024-06-11 08:12:42.752280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.803 08:12:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:12.803 08:12:43 -- common/autotest_common.sh@852 -- # return 0 00:20:12.803 08:12:43 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:13.064 [2024-06-11 08:12:43.541178] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.064 TLSTESTn1 00:20:13.064 08:12:43 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:13.326 08:12:43 -- target/tls.sh@205 -- # tgtconf='{ 00:20:13.326 "subsystems": [ 00:20:13.326 { 00:20:13.326 "subsystem": "iobuf", 00:20:13.326 "config": [ 00:20:13.326 { 00:20:13.326 "method": "iobuf_set_options", 00:20:13.326 "params": { 00:20:13.326 "small_pool_count": 8192, 00:20:13.326 "large_pool_count": 1024, 00:20:13.326 "small_bufsize": 8192, 00:20:13.326 "large_bufsize": 135168 00:20:13.326 } 00:20:13.326 } 00:20:13.326 ] 00:20:13.326 }, 00:20:13.326 { 00:20:13.326 "subsystem": "sock", 00:20:13.326 "config": [ 00:20:13.326 { 00:20:13.326 "method": "sock_impl_set_options", 00:20:13.326 "params": { 00:20:13.326 "impl_name": "posix", 00:20:13.326 "recv_buf_size": 2097152, 00:20:13.326 "send_buf_size": 2097152, 00:20:13.326 "enable_recv_pipe": true, 00:20:13.326 "enable_quickack": false, 00:20:13.326 "enable_placement_id": 0, 00:20:13.326 "enable_zerocopy_send_server": true, 00:20:13.326 "enable_zerocopy_send_client": false, 00:20:13.326 "zerocopy_threshold": 0, 00:20:13.326 "tls_version": 0, 00:20:13.326 "enable_ktls": false 00:20:13.326 } 00:20:13.326 }, 00:20:13.326 { 00:20:13.326 "method": "sock_impl_set_options", 00:20:13.326 "params": { 00:20:13.326 "impl_name": "ssl", 00:20:13.326 "recv_buf_size": 4096, 00:20:13.326 "send_buf_size": 4096, 00:20:13.326 "enable_recv_pipe": true, 00:20:13.326 "enable_quickack": false, 00:20:13.326 "enable_placement_id": 0, 00:20:13.326 "enable_zerocopy_send_server": true, 00:20:13.326 "enable_zerocopy_send_client": false, 00:20:13.326 "zerocopy_threshold": 0, 00:20:13.326 "tls_version": 0, 00:20:13.326 "enable_ktls": false 00:20:13.327 } 00:20:13.327 } 00:20:13.327 ] 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "subsystem": "vmd", 00:20:13.327 "config": [] 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "subsystem": "accel", 00:20:13.327 "config": [ 00:20:13.327 { 00:20:13.327 "method": "accel_set_options", 00:20:13.327 "params": { 00:20:13.327 "small_cache_size": 128, 
00:20:13.327 "large_cache_size": 16, 00:20:13.327 "task_count": 2048, 00:20:13.327 "sequence_count": 2048, 00:20:13.327 "buf_count": 2048 00:20:13.327 } 00:20:13.327 } 00:20:13.327 ] 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "subsystem": "bdev", 00:20:13.327 "config": [ 00:20:13.327 { 00:20:13.327 "method": "bdev_set_options", 00:20:13.327 "params": { 00:20:13.327 "bdev_io_pool_size": 65535, 00:20:13.327 "bdev_io_cache_size": 256, 00:20:13.327 "bdev_auto_examine": true, 00:20:13.327 "iobuf_small_cache_size": 128, 00:20:13.327 "iobuf_large_cache_size": 16 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "bdev_raid_set_options", 00:20:13.327 "params": { 00:20:13.327 "process_window_size_kb": 1024 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "bdev_iscsi_set_options", 00:20:13.327 "params": { 00:20:13.327 "timeout_sec": 30 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "bdev_nvme_set_options", 00:20:13.327 "params": { 00:20:13.327 "action_on_timeout": "none", 00:20:13.327 "timeout_us": 0, 00:20:13.327 "timeout_admin_us": 0, 00:20:13.327 "keep_alive_timeout_ms": 10000, 00:20:13.327 "transport_retry_count": 4, 00:20:13.327 "arbitration_burst": 0, 00:20:13.327 "low_priority_weight": 0, 00:20:13.327 "medium_priority_weight": 0, 00:20:13.327 "high_priority_weight": 0, 00:20:13.327 "nvme_adminq_poll_period_us": 10000, 00:20:13.327 "nvme_ioq_poll_period_us": 0, 00:20:13.327 "io_queue_requests": 0, 00:20:13.327 "delay_cmd_submit": true, 00:20:13.327 "bdev_retry_count": 3, 00:20:13.327 "transport_ack_timeout": 0, 00:20:13.327 "ctrlr_loss_timeout_sec": 0, 00:20:13.327 "reconnect_delay_sec": 0, 00:20:13.327 "fast_io_fail_timeout_sec": 0, 00:20:13.327 "generate_uuids": false, 00:20:13.327 "transport_tos": 0, 00:20:13.327 "io_path_stat": false, 00:20:13.327 "allow_accel_sequence": false 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "bdev_nvme_set_hotplug", 00:20:13.327 "params": { 00:20:13.327 "period_us": 100000, 00:20:13.327 "enable": false 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "bdev_malloc_create", 00:20:13.327 "params": { 00:20:13.327 "name": "malloc0", 00:20:13.327 "num_blocks": 8192, 00:20:13.327 "block_size": 4096, 00:20:13.327 "physical_block_size": 4096, 00:20:13.327 "uuid": "b5353351-b5b2-4abc-b7ee-bd1d7d448abf", 00:20:13.327 "optimal_io_boundary": 0 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "bdev_wait_for_examine" 00:20:13.327 } 00:20:13.327 ] 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "subsystem": "nbd", 00:20:13.327 "config": [] 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "subsystem": "scheduler", 00:20:13.327 "config": [ 00:20:13.327 { 00:20:13.327 "method": "framework_set_scheduler", 00:20:13.327 "params": { 00:20:13.327 "name": "static" 00:20:13.327 } 00:20:13.327 } 00:20:13.327 ] 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "subsystem": "nvmf", 00:20:13.327 "config": [ 00:20:13.327 { 00:20:13.327 "method": "nvmf_set_config", 00:20:13.327 "params": { 00:20:13.327 "discovery_filter": "match_any", 00:20:13.327 "admin_cmd_passthru": { 00:20:13.327 "identify_ctrlr": false 00:20:13.327 } 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "nvmf_set_max_subsystems", 00:20:13.327 "params": { 00:20:13.327 "max_subsystems": 1024 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "nvmf_set_crdt", 00:20:13.327 "params": { 00:20:13.327 "crdt1": 0, 00:20:13.327 "crdt2": 0, 00:20:13.327 "crdt3": 0 00:20:13.327 } 
00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "nvmf_create_transport", 00:20:13.327 "params": { 00:20:13.327 "trtype": "TCP", 00:20:13.327 "max_queue_depth": 128, 00:20:13.327 "max_io_qpairs_per_ctrlr": 127, 00:20:13.327 "in_capsule_data_size": 4096, 00:20:13.327 "max_io_size": 131072, 00:20:13.327 "io_unit_size": 131072, 00:20:13.327 "max_aq_depth": 128, 00:20:13.327 "num_shared_buffers": 511, 00:20:13.327 "buf_cache_size": 4294967295, 00:20:13.327 "dif_insert_or_strip": false, 00:20:13.327 "zcopy": false, 00:20:13.327 "c2h_success": false, 00:20:13.327 "sock_priority": 0, 00:20:13.327 "abort_timeout_sec": 1 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "nvmf_create_subsystem", 00:20:13.327 "params": { 00:20:13.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.327 "allow_any_host": false, 00:20:13.327 "serial_number": "SPDK00000000000001", 00:20:13.327 "model_number": "SPDK bdev Controller", 00:20:13.327 "max_namespaces": 10, 00:20:13.327 "min_cntlid": 1, 00:20:13.327 "max_cntlid": 65519, 00:20:13.327 "ana_reporting": false 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "nvmf_subsystem_add_host", 00:20:13.327 "params": { 00:20:13.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.327 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.327 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "nvmf_subsystem_add_ns", 00:20:13.327 "params": { 00:20:13.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.327 "namespace": { 00:20:13.327 "nsid": 1, 00:20:13.327 "bdev_name": "malloc0", 00:20:13.327 "nguid": "B5353351B5B24ABCB7EEBD1D7D448ABF", 00:20:13.327 "uuid": "b5353351-b5b2-4abc-b7ee-bd1d7d448abf" 00:20:13.327 } 00:20:13.327 } 00:20:13.327 }, 00:20:13.327 { 00:20:13.327 "method": "nvmf_subsystem_add_listener", 00:20:13.327 "params": { 00:20:13.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.327 "listen_address": { 00:20:13.327 "trtype": "TCP", 00:20:13.327 "adrfam": "IPv4", 00:20:13.327 "traddr": "10.0.0.2", 00:20:13.327 "trsvcid": "4420" 00:20:13.327 }, 00:20:13.327 "secure_channel": true 00:20:13.327 } 00:20:13.327 } 00:20:13.327 ] 00:20:13.327 } 00:20:13.327 ] 00:20:13.327 }' 00:20:13.327 08:12:43 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:13.589 08:12:44 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:13.589 "subsystems": [ 00:20:13.589 { 00:20:13.589 "subsystem": "iobuf", 00:20:13.589 "config": [ 00:20:13.589 { 00:20:13.589 "method": "iobuf_set_options", 00:20:13.589 "params": { 00:20:13.589 "small_pool_count": 8192, 00:20:13.589 "large_pool_count": 1024, 00:20:13.589 "small_bufsize": 8192, 00:20:13.589 "large_bufsize": 135168 00:20:13.589 } 00:20:13.589 } 00:20:13.589 ] 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "subsystem": "sock", 00:20:13.589 "config": [ 00:20:13.589 { 00:20:13.589 "method": "sock_impl_set_options", 00:20:13.589 "params": { 00:20:13.589 "impl_name": "posix", 00:20:13.589 "recv_buf_size": 2097152, 00:20:13.589 "send_buf_size": 2097152, 00:20:13.589 "enable_recv_pipe": true, 00:20:13.589 "enable_quickack": false, 00:20:13.589 "enable_placement_id": 0, 00:20:13.589 "enable_zerocopy_send_server": true, 00:20:13.589 "enable_zerocopy_send_client": false, 00:20:13.589 "zerocopy_threshold": 0, 00:20:13.589 "tls_version": 0, 00:20:13.589 "enable_ktls": false 00:20:13.589 } 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "method": 
"sock_impl_set_options", 00:20:13.589 "params": { 00:20:13.589 "impl_name": "ssl", 00:20:13.589 "recv_buf_size": 4096, 00:20:13.589 "send_buf_size": 4096, 00:20:13.589 "enable_recv_pipe": true, 00:20:13.589 "enable_quickack": false, 00:20:13.589 "enable_placement_id": 0, 00:20:13.589 "enable_zerocopy_send_server": true, 00:20:13.589 "enable_zerocopy_send_client": false, 00:20:13.589 "zerocopy_threshold": 0, 00:20:13.589 "tls_version": 0, 00:20:13.589 "enable_ktls": false 00:20:13.589 } 00:20:13.589 } 00:20:13.589 ] 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "subsystem": "vmd", 00:20:13.589 "config": [] 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "subsystem": "accel", 00:20:13.589 "config": [ 00:20:13.589 { 00:20:13.589 "method": "accel_set_options", 00:20:13.589 "params": { 00:20:13.589 "small_cache_size": 128, 00:20:13.589 "large_cache_size": 16, 00:20:13.589 "task_count": 2048, 00:20:13.589 "sequence_count": 2048, 00:20:13.589 "buf_count": 2048 00:20:13.589 } 00:20:13.589 } 00:20:13.589 ] 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "subsystem": "bdev", 00:20:13.589 "config": [ 00:20:13.589 { 00:20:13.589 "method": "bdev_set_options", 00:20:13.589 "params": { 00:20:13.589 "bdev_io_pool_size": 65535, 00:20:13.589 "bdev_io_cache_size": 256, 00:20:13.589 "bdev_auto_examine": true, 00:20:13.589 "iobuf_small_cache_size": 128, 00:20:13.589 "iobuf_large_cache_size": 16 00:20:13.589 } 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "method": "bdev_raid_set_options", 00:20:13.589 "params": { 00:20:13.589 "process_window_size_kb": 1024 00:20:13.589 } 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "method": "bdev_iscsi_set_options", 00:20:13.589 "params": { 00:20:13.589 "timeout_sec": 30 00:20:13.589 } 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "method": "bdev_nvme_set_options", 00:20:13.589 "params": { 00:20:13.589 "action_on_timeout": "none", 00:20:13.589 "timeout_us": 0, 00:20:13.589 "timeout_admin_us": 0, 00:20:13.589 "keep_alive_timeout_ms": 10000, 00:20:13.589 "transport_retry_count": 4, 00:20:13.589 "arbitration_burst": 0, 00:20:13.589 "low_priority_weight": 0, 00:20:13.589 "medium_priority_weight": 0, 00:20:13.589 "high_priority_weight": 0, 00:20:13.589 "nvme_adminq_poll_period_us": 10000, 00:20:13.589 "nvme_ioq_poll_period_us": 0, 00:20:13.589 "io_queue_requests": 512, 00:20:13.589 "delay_cmd_submit": true, 00:20:13.589 "bdev_retry_count": 3, 00:20:13.589 "transport_ack_timeout": 0, 00:20:13.589 "ctrlr_loss_timeout_sec": 0, 00:20:13.589 "reconnect_delay_sec": 0, 00:20:13.589 "fast_io_fail_timeout_sec": 0, 00:20:13.589 "generate_uuids": false, 00:20:13.589 "transport_tos": 0, 00:20:13.589 "io_path_stat": false, 00:20:13.589 "allow_accel_sequence": false 00:20:13.589 } 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "method": "bdev_nvme_attach_controller", 00:20:13.589 "params": { 00:20:13.589 "name": "TLSTEST", 00:20:13.589 "trtype": "TCP", 00:20:13.589 "adrfam": "IPv4", 00:20:13.589 "traddr": "10.0.0.2", 00:20:13.589 "trsvcid": "4420", 00:20:13.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.589 "prchk_reftag": false, 00:20:13.589 "prchk_guard": false, 00:20:13.589 "ctrlr_loss_timeout_sec": 0, 00:20:13.589 "reconnect_delay_sec": 0, 00:20:13.589 "fast_io_fail_timeout_sec": 0, 00:20:13.589 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:13.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:13.589 "hdgst": false, 00:20:13.589 "ddgst": false 00:20:13.589 } 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "method": "bdev_nvme_set_hotplug", 00:20:13.589 
"params": { 00:20:13.589 "period_us": 100000, 00:20:13.589 "enable": false 00:20:13.589 } 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "method": "bdev_wait_for_examine" 00:20:13.589 } 00:20:13.589 ] 00:20:13.589 }, 00:20:13.589 { 00:20:13.589 "subsystem": "nbd", 00:20:13.589 "config": [] 00:20:13.589 } 00:20:13.589 ] 00:20:13.589 }' 00:20:13.589 08:12:44 -- target/tls.sh@208 -- # killprocess 1082848 00:20:13.589 08:12:44 -- common/autotest_common.sh@926 -- # '[' -z 1082848 ']' 00:20:13.589 08:12:44 -- common/autotest_common.sh@930 -- # kill -0 1082848 00:20:13.589 08:12:44 -- common/autotest_common.sh@931 -- # uname 00:20:13.589 08:12:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:13.589 08:12:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1082848 00:20:13.589 08:12:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:13.589 08:12:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:13.589 08:12:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1082848' 00:20:13.589 killing process with pid 1082848 00:20:13.589 08:12:44 -- common/autotest_common.sh@945 -- # kill 1082848 00:20:13.590 Received shutdown signal, test time was about 10.000000 seconds 00:20:13.590 00:20:13.590 Latency(us) 00:20:13.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.590 =================================================================================================================== 00:20:13.590 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:13.590 08:12:44 -- common/autotest_common.sh@950 -- # wait 1082848 00:20:13.851 08:12:44 -- target/tls.sh@209 -- # killprocess 1082480 00:20:13.851 08:12:44 -- common/autotest_common.sh@926 -- # '[' -z 1082480 ']' 00:20:13.851 08:12:44 -- common/autotest_common.sh@930 -- # kill -0 1082480 00:20:13.851 08:12:44 -- common/autotest_common.sh@931 -- # uname 00:20:13.851 08:12:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:13.851 08:12:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1082480 00:20:13.851 08:12:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:13.851 08:12:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:13.851 08:12:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1082480' 00:20:13.851 killing process with pid 1082480 00:20:13.851 08:12:44 -- common/autotest_common.sh@945 -- # kill 1082480 00:20:13.851 08:12:44 -- common/autotest_common.sh@950 -- # wait 1082480 00:20:13.851 08:12:44 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:13.851 08:12:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:13.851 08:12:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:13.851 08:12:44 -- common/autotest_common.sh@10 -- # set +x 00:20:13.851 08:12:44 -- target/tls.sh@212 -- # echo '{ 00:20:13.851 "subsystems": [ 00:20:13.851 { 00:20:13.851 "subsystem": "iobuf", 00:20:13.851 "config": [ 00:20:13.851 { 00:20:13.851 "method": "iobuf_set_options", 00:20:13.851 "params": { 00:20:13.851 "small_pool_count": 8192, 00:20:13.851 "large_pool_count": 1024, 00:20:13.851 "small_bufsize": 8192, 00:20:13.851 "large_bufsize": 135168 00:20:13.851 } 00:20:13.851 } 00:20:13.851 ] 00:20:13.851 }, 00:20:13.851 { 00:20:13.851 "subsystem": "sock", 00:20:13.851 "config": [ 00:20:13.851 { 00:20:13.851 "method": "sock_impl_set_options", 00:20:13.851 "params": { 00:20:13.851 "impl_name": "posix", 00:20:13.851 
"recv_buf_size": 2097152, 00:20:13.851 "send_buf_size": 2097152, 00:20:13.851 "enable_recv_pipe": true, 00:20:13.851 "enable_quickack": false, 00:20:13.851 "enable_placement_id": 0, 00:20:13.851 "enable_zerocopy_send_server": true, 00:20:13.851 "enable_zerocopy_send_client": false, 00:20:13.851 "zerocopy_threshold": 0, 00:20:13.851 "tls_version": 0, 00:20:13.851 "enable_ktls": false 00:20:13.851 } 00:20:13.851 }, 00:20:13.851 { 00:20:13.851 "method": "sock_impl_set_options", 00:20:13.851 "params": { 00:20:13.851 "impl_name": "ssl", 00:20:13.851 "recv_buf_size": 4096, 00:20:13.851 "send_buf_size": 4096, 00:20:13.851 "enable_recv_pipe": true, 00:20:13.851 "enable_quickack": false, 00:20:13.851 "enable_placement_id": 0, 00:20:13.851 "enable_zerocopy_send_server": true, 00:20:13.851 "enable_zerocopy_send_client": false, 00:20:13.851 "zerocopy_threshold": 0, 00:20:13.851 "tls_version": 0, 00:20:13.851 "enable_ktls": false 00:20:13.851 } 00:20:13.851 } 00:20:13.851 ] 00:20:13.851 }, 00:20:13.851 { 00:20:13.851 "subsystem": "vmd", 00:20:13.851 "config": [] 00:20:13.851 }, 00:20:13.851 { 00:20:13.851 "subsystem": "accel", 00:20:13.851 "config": [ 00:20:13.851 { 00:20:13.851 "method": "accel_set_options", 00:20:13.852 "params": { 00:20:13.852 "small_cache_size": 128, 00:20:13.852 "large_cache_size": 16, 00:20:13.852 "task_count": 2048, 00:20:13.852 "sequence_count": 2048, 00:20:13.852 "buf_count": 2048 00:20:13.852 } 00:20:13.852 } 00:20:13.852 ] 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "subsystem": "bdev", 00:20:13.852 "config": [ 00:20:13.852 { 00:20:13.852 "method": "bdev_set_options", 00:20:13.852 "params": { 00:20:13.852 "bdev_io_pool_size": 65535, 00:20:13.852 "bdev_io_cache_size": 256, 00:20:13.852 "bdev_auto_examine": true, 00:20:13.852 "iobuf_small_cache_size": 128, 00:20:13.852 "iobuf_large_cache_size": 16 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "bdev_raid_set_options", 00:20:13.852 "params": { 00:20:13.852 "process_window_size_kb": 1024 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "bdev_iscsi_set_options", 00:20:13.852 "params": { 00:20:13.852 "timeout_sec": 30 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "bdev_nvme_set_options", 00:20:13.852 "params": { 00:20:13.852 "action_on_timeout": "none", 00:20:13.852 "timeout_us": 0, 00:20:13.852 "timeout_admin_us": 0, 00:20:13.852 "keep_alive_timeout_ms": 10000, 00:20:13.852 "transport_retry_count": 4, 00:20:13.852 "arbitration_burst": 0, 00:20:13.852 "low_priority_weight": 0, 00:20:13.852 "medium_priority_weight": 0, 00:20:13.852 "high_priority_weight": 0, 00:20:13.852 "nvme_adminq_poll_period_us": 10000, 00:20:13.852 "nvme_ioq_poll_period_us": 0, 00:20:13.852 "io_queue_requests": 0, 00:20:13.852 "delay_cmd_submit": true, 00:20:13.852 "bdev_retry_count": 3, 00:20:13.852 "transport_ack_timeout": 0, 00:20:13.852 "ctrlr_loss_timeout_sec": 0, 00:20:13.852 "reconnect_delay_sec": 0, 00:20:13.852 "fast_io_fail_timeout_sec": 0, 00:20:13.852 "generate_uuids": false, 00:20:13.852 "transport_tos": 0, 00:20:13.852 "io_path_stat": false, 00:20:13.852 "allow_accel_sequence": false 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "bdev_nvme_set_hotplug", 00:20:13.852 "params": { 00:20:13.852 "period_us": 100000, 00:20:13.852 "enable": false 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "bdev_malloc_create", 00:20:13.852 "params": { 00:20:13.852 "name": "malloc0", 00:20:13.852 "num_blocks": 8192, 00:20:13.852 "block_size": 4096, 
00:20:13.852 "physical_block_size": 4096, 00:20:13.852 "uuid": "b5353351-b5b2-4abc-b7ee-bd1d7d448abf", 00:20:13.852 "optimal_io_boundary": 0 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "bdev_wait_for_examine" 00:20:13.852 } 00:20:13.852 ] 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "subsystem": "nbd", 00:20:13.852 "config": [] 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "subsystem": "scheduler", 00:20:13.852 "config": [ 00:20:13.852 { 00:20:13.852 "method": "framework_set_scheduler", 00:20:13.852 "params": { 00:20:13.852 "name": "static" 00:20:13.852 } 00:20:13.852 } 00:20:13.852 ] 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "subsystem": "nvmf", 00:20:13.852 "config": [ 00:20:13.852 { 00:20:13.852 "method": "nvmf_set_config", 00:20:13.852 "params": { 00:20:13.852 "discovery_filter": "match_any", 00:20:13.852 "admin_cmd_passthru": { 00:20:13.852 "identify_ctrlr": false 00:20:13.852 } 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "nvmf_set_max_subsystems", 00:20:13.852 "params": { 00:20:13.852 "max_subsystems": 1024 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "nvmf_set_crdt", 00:20:13.852 "params": { 00:20:13.852 "crdt1": 0, 00:20:13.852 "crdt2": 0, 00:20:13.852 "crdt3": 0 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "nvmf_create_transport", 00:20:13.852 "params": { 00:20:13.852 "trtype": "TCP", 00:20:13.852 "max_queue_depth": 128, 00:20:13.852 "max_io_qpairs_per_ctrlr": 127, 00:20:13.852 "in_capsule_data_size": 4096, 00:20:13.852 "max_io_size": 131072, 00:20:13.852 "io_unit_size": 131072, 00:20:13.852 "max_aq_depth": 128, 00:20:13.852 "num_shared_buffers": 511, 00:20:13.852 "buf_cache_size": 4294967295, 00:20:13.852 "dif_insert_or_strip": false, 00:20:13.852 "zcopy": false, 00:20:13.852 "c2h_success": false, 00:20:13.852 "sock_priority": 0, 00:20:13.852 "abort_timeout_sec": 1 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "nvmf_create_subsystem", 00:20:13.852 "params": { 00:20:13.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.852 "allow_any_host": false, 00:20:13.852 "serial_number": "SPDK00000000000001", 00:20:13.852 "model_number": "SPDK bdev Controller", 00:20:13.852 "max_namespaces": 10, 00:20:13.852 "min_cntlid": 1, 00:20:13.852 "max_cntlid": 65519, 00:20:13.852 "ana_reporting": false 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "nvmf_subsystem_add_host", 00:20:13.852 "params": { 00:20:13.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.852 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.852 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "nvmf_subsystem_add_ns", 00:20:13.852 "params": { 00:20:13.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.852 "namespace": { 00:20:13.852 "nsid": 1, 00:20:13.852 "bdev_name": "malloc0", 00:20:13.852 "nguid": "B5353351B5B24ABCB7EEBD1D7D448ABF", 00:20:13.852 "uuid": "b5353351-b5b2-4abc-b7ee-bd1d7d448abf" 00:20:13.852 } 00:20:13.852 } 00:20:13.852 }, 00:20:13.852 { 00:20:13.852 "method": "nvmf_subsystem_add_listener", 00:20:13.852 "params": { 00:20:13.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.852 "listen_address": { 00:20:13.852 "trtype": "TCP", 00:20:13.852 "adrfam": "IPv4", 00:20:13.852 "traddr": "10.0.0.2", 00:20:13.852 "trsvcid": "4420" 00:20:13.852 }, 00:20:13.852 "secure_channel": true 00:20:13.852 } 00:20:13.852 } 00:20:13.852 ] 00:20:13.852 } 00:20:13.852 ] 00:20:13.852 }' 00:20:13.852 
08:12:44 -- nvmf/common.sh@469 -- # nvmfpid=1083258 00:20:13.852 08:12:44 -- nvmf/common.sh@470 -- # waitforlisten 1083258 00:20:13.852 08:12:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:13.852 08:12:44 -- common/autotest_common.sh@819 -- # '[' -z 1083258 ']' 00:20:13.852 08:12:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.852 08:12:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.852 08:12:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.852 08:12:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.852 08:12:44 -- common/autotest_common.sh@10 -- # set +x 00:20:13.852 [2024-06-11 08:12:44.478217] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:13.852 [2024-06-11 08:12:44.478270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.114 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.114 [2024-06-11 08:12:44.559786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.114 [2024-06-11 08:12:44.612466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:14.114 [2024-06-11 08:12:44.612558] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.114 [2024-06-11 08:12:44.612564] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.114 [2024-06-11 08:12:44.612569] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
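The two app_setup_trace notices above refer to the tracepoint ring buffer the target allocates because it was started with -e 0xFFFF and shared-memory id 0. A minimal sketch of pulling that data while the target is still running, assuming the spdk_trace binary from this build tree (output file names are illustrative):

    # parse a live snapshot of the nvmf tracepoints for shm id 0, as the notice suggests
    build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
    # or keep the raw ring buffer for offline analysis, as the cleanup step later tars up
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0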
00:20:14.114 [2024-06-11 08:12:44.612582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.375 [2024-06-11 08:12:44.787513] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.375 [2024-06-11 08:12:44.819543] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.375 [2024-06-11 08:12:44.819711] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.636 08:12:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.636 08:12:45 -- common/autotest_common.sh@852 -- # return 0 00:20:14.636 08:12:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.636 08:12:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:14.636 08:12:45 -- common/autotest_common.sh@10 -- # set +x 00:20:14.636 08:12:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.636 08:12:45 -- target/tls.sh@216 -- # bdevperf_pid=1083341 00:20:14.636 08:12:45 -- target/tls.sh@217 -- # waitforlisten 1083341 /var/tmp/bdevperf.sock 00:20:14.636 08:12:45 -- common/autotest_common.sh@819 -- # '[' -z 1083341 ']' 00:20:14.636 08:12:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.636 08:12:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:14.636 08:12:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.636 08:12:45 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:14.636 08:12:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:14.636 08:12:45 -- common/autotest_common.sh@10 -- # set +x 00:20:14.636 08:12:45 -- target/tls.sh@213 -- # echo '{ 00:20:14.636 "subsystems": [ 00:20:14.636 { 00:20:14.636 "subsystem": "iobuf", 00:20:14.636 "config": [ 00:20:14.636 { 00:20:14.636 "method": "iobuf_set_options", 00:20:14.636 "params": { 00:20:14.636 "small_pool_count": 8192, 00:20:14.636 "large_pool_count": 1024, 00:20:14.636 "small_bufsize": 8192, 00:20:14.636 "large_bufsize": 135168 00:20:14.636 } 00:20:14.636 } 00:20:14.636 ] 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "subsystem": "sock", 00:20:14.636 "config": [ 00:20:14.636 { 00:20:14.636 "method": "sock_impl_set_options", 00:20:14.636 "params": { 00:20:14.636 "impl_name": "posix", 00:20:14.636 "recv_buf_size": 2097152, 00:20:14.636 "send_buf_size": 2097152, 00:20:14.636 "enable_recv_pipe": true, 00:20:14.636 "enable_quickack": false, 00:20:14.636 "enable_placement_id": 0, 00:20:14.636 "enable_zerocopy_send_server": true, 00:20:14.636 "enable_zerocopy_send_client": false, 00:20:14.636 "zerocopy_threshold": 0, 00:20:14.636 "tls_version": 0, 00:20:14.636 "enable_ktls": false 00:20:14.636 } 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "method": "sock_impl_set_options", 00:20:14.636 "params": { 00:20:14.636 "impl_name": "ssl", 00:20:14.636 "recv_buf_size": 4096, 00:20:14.636 "send_buf_size": 4096, 00:20:14.636 "enable_recv_pipe": true, 00:20:14.636 "enable_quickack": false, 00:20:14.636 "enable_placement_id": 0, 00:20:14.636 "enable_zerocopy_send_server": true, 00:20:14.636 "enable_zerocopy_send_client": false, 00:20:14.636 "zerocopy_threshold": 0, 00:20:14.636 "tls_version": 0, 
00:20:14.636 "enable_ktls": false 00:20:14.636 } 00:20:14.636 } 00:20:14.636 ] 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "subsystem": "vmd", 00:20:14.636 "config": [] 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "subsystem": "accel", 00:20:14.636 "config": [ 00:20:14.636 { 00:20:14.636 "method": "accel_set_options", 00:20:14.636 "params": { 00:20:14.636 "small_cache_size": 128, 00:20:14.636 "large_cache_size": 16, 00:20:14.636 "task_count": 2048, 00:20:14.636 "sequence_count": 2048, 00:20:14.636 "buf_count": 2048 00:20:14.636 } 00:20:14.636 } 00:20:14.636 ] 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "subsystem": "bdev", 00:20:14.636 "config": [ 00:20:14.636 { 00:20:14.636 "method": "bdev_set_options", 00:20:14.636 "params": { 00:20:14.636 "bdev_io_pool_size": 65535, 00:20:14.636 "bdev_io_cache_size": 256, 00:20:14.636 "bdev_auto_examine": true, 00:20:14.636 "iobuf_small_cache_size": 128, 00:20:14.636 "iobuf_large_cache_size": 16 00:20:14.636 } 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "method": "bdev_raid_set_options", 00:20:14.636 "params": { 00:20:14.636 "process_window_size_kb": 1024 00:20:14.636 } 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "method": "bdev_iscsi_set_options", 00:20:14.636 "params": { 00:20:14.636 "timeout_sec": 30 00:20:14.636 } 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "method": "bdev_nvme_set_options", 00:20:14.636 "params": { 00:20:14.636 "action_on_timeout": "none", 00:20:14.636 "timeout_us": 0, 00:20:14.636 "timeout_admin_us": 0, 00:20:14.636 "keep_alive_timeout_ms": 10000, 00:20:14.636 "transport_retry_count": 4, 00:20:14.636 "arbitration_burst": 0, 00:20:14.636 "low_priority_weight": 0, 00:20:14.636 "medium_priority_weight": 0, 00:20:14.636 "high_priority_weight": 0, 00:20:14.636 "nvme_adminq_poll_period_us": 10000, 00:20:14.636 "nvme_ioq_poll_period_us": 0, 00:20:14.636 "io_queue_requests": 512, 00:20:14.636 "delay_cmd_submit": true, 00:20:14.636 "bdev_retry_count": 3, 00:20:14.636 "transport_ack_timeout": 0, 00:20:14.636 "ctrlr_loss_timeout_sec": 0, 00:20:14.636 "reconnect_delay_sec": 0, 00:20:14.636 "fast_io_fail_timeout_sec": 0, 00:20:14.636 "generate_uuids": false, 00:20:14.636 "transport_tos": 0, 00:20:14.636 "io_path_stat": false, 00:20:14.636 "allow_accel_sequence": false 00:20:14.636 } 00:20:14.636 }, 00:20:14.636 { 00:20:14.636 "method": "bdev_nvme_attach_controller", 00:20:14.636 "params": { 00:20:14.636 "name": "TLSTEST", 00:20:14.636 "trtype": "TCP", 00:20:14.636 "adrfam": "IPv4", 00:20:14.636 "traddr": "10.0.0.2", 00:20:14.636 "trsvcid": "4420", 00:20:14.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.636 "prchk_reftag": false, 00:20:14.636 "prchk_guard": false, 00:20:14.636 "ctrlr_loss_timeout_sec": 0, 00:20:14.636 "reconnect_delay_sec": 0, 00:20:14.636 "fast_io_fail_timeout_sec": 0, 00:20:14.636 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:14.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.637 "hdgst": false, 00:20:14.637 "ddgst": false 00:20:14.637 } 00:20:14.637 }, 00:20:14.637 { 00:20:14.637 "method": "bdev_nvme_set_hotplug", 00:20:14.637 "params": { 00:20:14.637 "period_us": 100000, 00:20:14.637 "enable": false 00:20:14.637 } 00:20:14.637 }, 00:20:14.637 { 00:20:14.637 "method": "bdev_wait_for_examine" 00:20:14.637 } 00:20:14.637 ] 00:20:14.637 }, 00:20:14.637 { 00:20:14.637 "subsystem": "nbd", 00:20:14.637 "config": [] 00:20:14.637 } 00:20:14.637 ] 00:20:14.637 }' 00:20:14.898 [2024-06-11 08:12:45.315334] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 
initialization... 00:20:14.898 [2024-06-11 08:12:45.315381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083341 ] 00:20:14.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.898 [2024-06-11 08:12:45.364738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.898 [2024-06-11 08:12:45.415145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.898 [2024-06-11 08:12:45.530941] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.469 08:12:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:15.469 08:12:46 -- common/autotest_common.sh@852 -- # return 0 00:20:15.469 08:12:46 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:15.730 Running I/O for 10 seconds... 00:20:25.749 00:20:25.749 Latency(us) 00:20:25.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.749 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.749 Verification LBA range: start 0x0 length 0x2000 00:20:25.749 TLSTESTn1 : 10.02 6587.56 25.73 0.00 0.00 19410.12 4096.00 48715.09 00:20:25.749 =================================================================================================================== 00:20:25.749 Total : 6587.56 25.73 0.00 0.00 19410.12 4096.00 48715.09 00:20:25.749 0 00:20:25.749 08:12:56 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:25.749 08:12:56 -- target/tls.sh@223 -- # killprocess 1083341 00:20:25.749 08:12:56 -- common/autotest_common.sh@926 -- # '[' -z 1083341 ']' 00:20:25.749 08:12:56 -- common/autotest_common.sh@930 -- # kill -0 1083341 00:20:25.749 08:12:56 -- common/autotest_common.sh@931 -- # uname 00:20:25.749 08:12:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:25.749 08:12:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1083341 00:20:25.749 08:12:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:25.749 08:12:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:25.749 08:12:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1083341' 00:20:25.749 killing process with pid 1083341 00:20:25.749 08:12:56 -- common/autotest_common.sh@945 -- # kill 1083341 00:20:25.749 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.749 00:20:25.749 Latency(us) 00:20:25.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.749 =================================================================================================================== 00:20:25.749 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.749 08:12:56 -- common/autotest_common.sh@950 -- # wait 1083341 00:20:25.749 08:12:56 -- target/tls.sh@224 -- # killprocess 1083258 00:20:25.749 08:12:56 -- common/autotest_common.sh@926 -- # '[' -z 1083258 ']' 00:20:25.749 08:12:56 -- common/autotest_common.sh@930 -- # kill -0 1083258 00:20:25.749 08:12:56 -- common/autotest_common.sh@931 -- # uname 00:20:25.749 08:12:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:25.749 08:12:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1083258 00:20:26.010 08:12:56 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:26.010 08:12:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:26.010 08:12:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1083258' 00:20:26.010 killing process with pid 1083258 00:20:26.010 08:12:56 -- common/autotest_common.sh@945 -- # kill 1083258 00:20:26.010 08:12:56 -- common/autotest_common.sh@950 -- # wait 1083258 00:20:26.010 08:12:56 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:26.010 08:12:56 -- target/tls.sh@227 -- # cleanup 00:20:26.010 08:12:56 -- target/tls.sh@15 -- # process_shm --id 0 00:20:26.010 08:12:56 -- common/autotest_common.sh@796 -- # type=--id 00:20:26.010 08:12:56 -- common/autotest_common.sh@797 -- # id=0 00:20:26.010 08:12:56 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:26.010 08:12:56 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:26.010 08:12:56 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:26.010 08:12:56 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:26.010 08:12:56 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:26.010 08:12:56 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:26.010 nvmf_trace.0 00:20:26.010 08:12:56 -- common/autotest_common.sh@811 -- # return 0 00:20:26.010 08:12:56 -- target/tls.sh@16 -- # killprocess 1083341 00:20:26.010 08:12:56 -- common/autotest_common.sh@926 -- # '[' -z 1083341 ']' 00:20:26.010 08:12:56 -- common/autotest_common.sh@930 -- # kill -0 1083341 00:20:26.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1083341) - No such process 00:20:26.010 08:12:56 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1083341 is not found' 00:20:26.010 Process with pid 1083341 is not found 00:20:26.010 08:12:56 -- target/tls.sh@17 -- # nvmftestfini 00:20:26.010 08:12:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:26.010 08:12:56 -- nvmf/common.sh@116 -- # sync 00:20:26.010 08:12:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:26.010 08:12:56 -- nvmf/common.sh@119 -- # set +e 00:20:26.010 08:12:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:26.010 08:12:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:26.010 rmmod nvme_tcp 00:20:26.010 rmmod nvme_fabrics 00:20:26.010 rmmod nvme_keyring 00:20:26.279 08:12:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:26.279 08:12:56 -- nvmf/common.sh@123 -- # set -e 00:20:26.279 08:12:56 -- nvmf/common.sh@124 -- # return 0 00:20:26.279 08:12:56 -- nvmf/common.sh@477 -- # '[' -n 1083258 ']' 00:20:26.279 08:12:56 -- nvmf/common.sh@478 -- # killprocess 1083258 00:20:26.279 08:12:56 -- common/autotest_common.sh@926 -- # '[' -z 1083258 ']' 00:20:26.279 08:12:56 -- common/autotest_common.sh@930 -- # kill -0 1083258 00:20:26.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1083258) - No such process 00:20:26.279 08:12:56 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1083258 is not found' 00:20:26.279 Process with pid 1083258 is not found 00:20:26.279 08:12:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:26.279 08:12:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:26.279 08:12:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:26.279 08:12:56 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.279 08:12:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:26.279 08:12:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.279 08:12:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.279 08:12:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.196 08:12:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:28.196 08:12:58 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:28.196 00:20:28.196 real 1m11.671s 00:20:28.196 user 1m47.677s 00:20:28.196 sys 0m23.845s 00:20:28.196 08:12:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.196 08:12:58 -- common/autotest_common.sh@10 -- # set +x 00:20:28.196 ************************************ 00:20:28.196 END TEST nvmf_tls 00:20:28.196 ************************************ 00:20:28.196 08:12:58 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.196 08:12:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:28.196 08:12:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:28.196 08:12:58 -- common/autotest_common.sh@10 -- # set +x 00:20:28.196 ************************************ 00:20:28.196 START TEST nvmf_fips 00:20:28.196 ************************************ 00:20:28.196 08:12:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:28.457 * Looking for test storage... 
00:20:28.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:28.457 08:12:58 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.457 08:12:58 -- nvmf/common.sh@7 -- # uname -s 00:20:28.457 08:12:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.458 08:12:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.458 08:12:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.458 08:12:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.458 08:12:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.458 08:12:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.458 08:12:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.458 08:12:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.458 08:12:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.458 08:12:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.458 08:12:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.458 08:12:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:28.458 08:12:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.458 08:12:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.458 08:12:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.458 08:12:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.458 08:12:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.458 08:12:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.458 08:12:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.458 08:12:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.458 08:12:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.458 08:12:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.458 08:12:58 -- paths/export.sh@5 -- # export PATH 00:20:28.458 08:12:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.458 08:12:58 -- nvmf/common.sh@46 -- # : 0 00:20:28.458 08:12:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:28.458 08:12:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:28.458 08:12:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:28.458 08:12:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.458 08:12:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.458 08:12:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:28.458 08:12:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:28.458 08:12:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:28.458 08:12:58 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.458 08:12:58 -- fips/fips.sh@89 -- # check_openssl_version 00:20:28.458 08:12:58 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:28.458 08:12:58 -- fips/fips.sh@85 -- # openssl version 00:20:28.458 08:12:58 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:28.458 08:12:58 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:28.458 08:12:58 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:28.458 08:12:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:28.458 08:12:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:28.458 08:12:58 -- scripts/common.sh@335 -- # IFS=.-: 00:20:28.458 08:12:58 -- scripts/common.sh@335 -- # read -ra ver1 00:20:28.458 08:12:58 -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.458 08:12:58 -- scripts/common.sh@336 -- # read -ra ver2 00:20:28.458 08:12:58 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:28.458 08:12:58 -- scripts/common.sh@339 -- # ver1_l=3 00:20:28.458 08:12:58 -- scripts/common.sh@340 -- # ver2_l=3 00:20:28.458 08:12:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:28.458 08:12:58 -- scripts/common.sh@343 -- # case "$op" in 00:20:28.458 08:12:58 -- scripts/common.sh@347 -- # : 1 00:20:28.458 08:12:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:28.458 08:12:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:28.458 08:12:58 -- scripts/common.sh@364 -- # decimal 3 00:20:28.458 08:12:58 -- scripts/common.sh@352 -- # local d=3 00:20:28.458 08:12:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.458 08:12:58 -- scripts/common.sh@354 -- # echo 3 00:20:28.458 08:12:58 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:28.458 08:12:58 -- scripts/common.sh@365 -- # decimal 3 00:20:28.458 08:12:58 -- scripts/common.sh@352 -- # local d=3 00:20:28.458 08:12:58 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:28.458 08:12:58 -- scripts/common.sh@354 -- # echo 3 00:20:28.458 08:12:58 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:28.458 08:12:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:28.458 08:12:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:28.458 08:12:58 -- scripts/common.sh@363 -- # (( v++ )) 00:20:28.458 08:12:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.458 08:12:58 -- scripts/common.sh@364 -- # decimal 0 00:20:28.458 08:12:58 -- scripts/common.sh@352 -- # local d=0 00:20:28.458 08:12:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.458 08:12:58 -- scripts/common.sh@354 -- # echo 0 00:20:28.458 08:12:58 -- scripts/common.sh@364 -- # ver1[v]=0 00:20:28.458 08:12:58 -- scripts/common.sh@365 -- # decimal 0 00:20:28.458 08:12:58 -- scripts/common.sh@352 -- # local d=0 00:20:28.458 08:12:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.458 08:12:58 -- scripts/common.sh@354 -- # echo 0 00:20:28.458 08:12:58 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:28.458 08:12:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:28.458 08:12:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:28.458 08:12:58 -- scripts/common.sh@363 -- # (( v++ )) 00:20:28.458 08:12:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.458 08:12:58 -- scripts/common.sh@364 -- # decimal 9 00:20:28.458 08:12:58 -- scripts/common.sh@352 -- # local d=9 00:20:28.458 08:12:58 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:28.458 08:12:58 -- scripts/common.sh@354 -- # echo 9 00:20:28.458 08:12:58 -- scripts/common.sh@364 -- # ver1[v]=9 00:20:28.458 08:12:58 -- scripts/common.sh@365 -- # decimal 0 00:20:28.458 08:12:58 -- scripts/common.sh@352 -- # local d=0 00:20:28.458 08:12:58 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:28.458 08:12:58 -- scripts/common.sh@354 -- # echo 0 00:20:28.458 08:12:58 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:28.458 08:12:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:28.458 08:12:58 -- scripts/common.sh@366 -- # return 0 00:20:28.458 08:12:58 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:28.458 08:12:58 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:28.458 08:12:58 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:28.458 08:12:58 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:28.458 08:12:58 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:28.458 08:12:58 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:28.458 08:12:58 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:28.458 08:12:58 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:28.458 08:12:58 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:28.458 08:12:58 -- fips/fips.sh@114 -- # build_openssl_config 00:20:28.458 08:12:58 -- fips/fips.sh@37 -- # cat 00:20:28.458 08:12:58 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:28.458 08:12:58 -- fips/fips.sh@58 -- # cat - 00:20:28.458 08:12:58 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:28.458 08:12:58 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:28.458 08:12:58 -- fips/fips.sh@117 -- # mapfile -t providers 00:20:28.458 08:12:59 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:20:28.458 08:12:59 -- fips/fips.sh@117 -- # openssl list -providers 00:20:28.458 08:12:59 -- fips/fips.sh@117 -- # grep name 00:20:28.458 08:12:59 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:28.458 08:12:59 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:28.458 08:12:59 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:28.458 08:12:59 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:28.458 08:12:59 -- common/autotest_common.sh@640 -- # local es=0 00:20:28.458 08:12:59 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:28.458 08:12:59 -- fips/fips.sh@128 -- # : 00:20:28.458 08:12:59 -- common/autotest_common.sh@628 -- # local arg=openssl 00:20:28.458 08:12:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.458 08:12:59 -- common/autotest_common.sh@632 -- # type -t openssl 00:20:28.458 08:12:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.458 08:12:59 -- common/autotest_common.sh@634 -- # type -P openssl 00:20:28.458 08:12:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.458 08:12:59 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:20:28.458 08:12:59 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:20:28.458 08:12:59 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:20:28.458 Error setting digest 00:20:28.458 00C21662157F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:28.458 00C21662157F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:28.458 08:12:59 -- common/autotest_common.sh@643 -- # es=1 00:20:28.459 08:12:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:28.459 08:12:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:28.459 08:12:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
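The "Error setting digest" lines above are the expected result: fips.sh generates a spdk_fips.conf that forces the OpenSSL FIPS provider and then deliberately runs openssl md5, which must fail because MD5 is not an approved algorithm. A minimal sketch of the same sanity check against a FIPS-enabled OpenSSL 3 build (the config file name mirrors the one produced by build_openssl_config; any configuration that activates the FIPS provider would behave the same way):

    export OPENSSL_CONF=spdk_fips.conf
    echo -n test | openssl md5       # expected to fail: MD5 is not FIPS-approved
    echo -n test | openssl sha256    # expected to succeed under the FIPS provider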
00:20:28.459 08:12:59 -- fips/fips.sh@131 -- # nvmftestinit 00:20:28.459 08:12:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:28.459 08:12:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.459 08:12:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:28.459 08:12:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:28.459 08:12:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:28.459 08:12:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.459 08:12:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.459 08:12:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.719 08:12:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:28.719 08:12:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:28.719 08:12:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:28.719 08:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:35.384 08:13:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:35.384 08:13:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:35.384 08:13:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:35.384 08:13:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:35.384 08:13:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:35.384 08:13:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:35.384 08:13:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:35.384 08:13:05 -- nvmf/common.sh@294 -- # net_devs=() 00:20:35.384 08:13:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:35.384 08:13:05 -- nvmf/common.sh@295 -- # e810=() 00:20:35.384 08:13:05 -- nvmf/common.sh@295 -- # local -ga e810 00:20:35.384 08:13:05 -- nvmf/common.sh@296 -- # x722=() 00:20:35.384 08:13:05 -- nvmf/common.sh@296 -- # local -ga x722 00:20:35.384 08:13:05 -- nvmf/common.sh@297 -- # mlx=() 00:20:35.384 08:13:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:35.384 08:13:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.384 08:13:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:35.384 08:13:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:35.384 08:13:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:35.384 08:13:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:35.384 08:13:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:35.384 Found 0000:31:00.0 
(0x8086 - 0x159b) 00:20:35.384 08:13:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:35.384 08:13:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:35.384 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:35.384 08:13:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:35.384 08:13:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:35.384 08:13:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.384 08:13:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:35.384 08:13:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.384 08:13:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:35.384 Found net devices under 0000:31:00.0: cvl_0_0 00:20:35.384 08:13:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.384 08:13:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:35.384 08:13:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.384 08:13:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:35.384 08:13:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.384 08:13:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:35.384 Found net devices under 0000:31:00.1: cvl_0_1 00:20:35.384 08:13:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.384 08:13:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:35.384 08:13:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:35.384 08:13:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:35.384 08:13:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:35.384 08:13:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.384 08:13:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.384 08:13:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.384 08:13:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:35.384 08:13:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.384 08:13:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.384 08:13:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:35.384 08:13:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.384 08:13:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.384 08:13:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:35.384 08:13:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:35.384 08:13:05 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:20:35.384 08:13:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.646 08:13:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.646 08:13:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.646 08:13:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:35.646 08:13:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.646 08:13:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.646 08:13:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.646 08:13:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:35.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:20:35.646 00:20:35.646 --- 10.0.0.2 ping statistics --- 00:20:35.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.646 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:20:35.646 08:13:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:20:35.646 00:20:35.646 --- 10.0.0.1 ping statistics --- 00:20:35.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.646 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:20:35.646 08:13:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.646 08:13:06 -- nvmf/common.sh@410 -- # return 0 00:20:35.646 08:13:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:35.646 08:13:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.646 08:13:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:35.646 08:13:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:35.646 08:13:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.646 08:13:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:35.646 08:13:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:35.646 08:13:06 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:35.646 08:13:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:35.646 08:13:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:35.646 08:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:35.646 08:13:06 -- nvmf/common.sh@469 -- # nvmfpid=1089809 00:20:35.646 08:13:06 -- nvmf/common.sh@470 -- # waitforlisten 1089809 00:20:35.646 08:13:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:35.646 08:13:06 -- common/autotest_common.sh@819 -- # '[' -z 1089809 ']' 00:20:35.646 08:13:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.646 08:13:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:35.646 08:13:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.646 08:13:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:35.646 08:13:06 -- common/autotest_common.sh@10 -- # set +x 00:20:35.646 [2024-06-11 08:13:06.289885] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
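The network setup traced above gives this single CI host two endpoints that talk over real TCP: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, and the two pings confirm reachability in both directions. Condensed from the commands above (the interface names are specific to this E810 host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator namespace -> target namespace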
00:20:35.646 [2024-06-11 08:13:06.289940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.907 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.907 [2024-06-11 08:13:06.373295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.907 [2024-06-11 08:13:06.454067] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.907 [2024-06-11 08:13:06.454227] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.907 [2024-06-11 08:13:06.454237] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.907 [2024-06-11 08:13:06.454244] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.907 [2024-06-11 08:13:06.454278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.480 08:13:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:36.480 08:13:07 -- common/autotest_common.sh@852 -- # return 0 00:20:36.480 08:13:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.480 08:13:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:36.480 08:13:07 -- common/autotest_common.sh@10 -- # set +x 00:20:36.480 08:13:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.480 08:13:07 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:36.480 08:13:07 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:36.480 08:13:07 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:36.480 08:13:07 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:36.480 08:13:07 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:36.480 08:13:07 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:36.480 08:13:07 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:36.480 08:13:07 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:36.742 [2024-06-11 08:13:07.233718] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.742 [2024-06-11 08:13:07.249718] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:36.742 [2024-06-11 08:13:07.249992] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.742 malloc0 00:20:36.742 08:13:07 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.742 08:13:07 -- fips/fips.sh@148 -- # bdevperf_pid=1090169 00:20:36.742 08:13:07 -- fips/fips.sh@149 -- # waitforlisten 1090169 /var/tmp/bdevperf.sock 00:20:36.742 08:13:07 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.742 08:13:07 -- common/autotest_common.sh@819 -- # '[' -z 1090169 ']' 00:20:36.742 08:13:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.742 08:13:07 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.742 08:13:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.742 08:13:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.742 08:13:07 -- common/autotest_common.sh@10 -- # set +x 00:20:36.742 [2024-06-11 08:13:07.366917] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:36.742 [2024-06-11 08:13:07.366994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090169 ] 00:20:37.003 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.003 [2024-06-11 08:13:07.424591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.003 [2024-06-11 08:13:07.485598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.574 08:13:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.574 08:13:08 -- common/autotest_common.sh@852 -- # return 0 00:20:37.574 08:13:08 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:37.835 [2024-06-11 08:13:08.265542] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.835 TLSTESTn1 00:20:37.835 08:13:08 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:37.835 Running I/O for 10 seconds... 
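By the time the 10-second verify run above starts, the TLS pieces are already wired up: fips.sh writes the NVMeTLSkey-1:01 PSK to key.txt with mode 0600, hands that file to setup_nvmf_tgt_conf for the target side, then launches bdevperf in passive mode and attaches a TLS-protected controller before driving I/O. A compact restatement of the initiator-side steps from the trace, with $SPDK_DIR standing in for the full Jenkins workspace path:

    # PSK used for the TLS listener/connection (value taken from fips.sh)
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$SPDK_DIR/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path" && chmod 0600 "$key_path"

    # bdevperf is started idle (-z) and configured over its own RPC socket
    $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # run the queued verify workload against the attached TLSTESTn1 namespace
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests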
00:20:50.059 00:20:50.059 Latency(us) 00:20:50.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.059 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.059 Verification LBA range: start 0x0 length 0x2000 00:20:50.059 TLSTESTn1 : 10.01 6714.17 26.23 0.00 0.00 19044.44 3822.93 56797.87 00:20:50.059 =================================================================================================================== 00:20:50.059 Total : 6714.17 26.23 0.00 0.00 19044.44 3822.93 56797.87 00:20:50.059 0 00:20:50.059 08:13:18 -- fips/fips.sh@1 -- # cleanup 00:20:50.059 08:13:18 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:50.059 08:13:18 -- common/autotest_common.sh@796 -- # type=--id 00:20:50.059 08:13:18 -- common/autotest_common.sh@797 -- # id=0 00:20:50.059 08:13:18 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:50.059 08:13:18 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:50.059 08:13:18 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:50.059 08:13:18 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:50.059 08:13:18 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:50.059 08:13:18 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:50.059 nvmf_trace.0 00:20:50.059 08:13:18 -- common/autotest_common.sh@811 -- # return 0 00:20:50.059 08:13:18 -- fips/fips.sh@16 -- # killprocess 1090169 00:20:50.059 08:13:18 -- common/autotest_common.sh@926 -- # '[' -z 1090169 ']' 00:20:50.059 08:13:18 -- common/autotest_common.sh@930 -- # kill -0 1090169 00:20:50.059 08:13:18 -- common/autotest_common.sh@931 -- # uname 00:20:50.059 08:13:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:50.059 08:13:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1090169 00:20:50.059 08:13:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:50.059 08:13:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:50.059 08:13:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1090169' 00:20:50.059 killing process with pid 1090169 00:20:50.059 08:13:18 -- common/autotest_common.sh@945 -- # kill 1090169 00:20:50.059 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.059 00:20:50.059 Latency(us) 00:20:50.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.059 =================================================================================================================== 00:20:50.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.059 08:13:18 -- common/autotest_common.sh@950 -- # wait 1090169 00:20:50.059 08:13:18 -- fips/fips.sh@17 -- # nvmftestfini 00:20:50.059 08:13:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:50.059 08:13:18 -- nvmf/common.sh@116 -- # sync 00:20:50.059 08:13:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:50.059 08:13:18 -- nvmf/common.sh@119 -- # set +e 00:20:50.059 08:13:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:50.059 08:13:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:50.059 rmmod nvme_tcp 00:20:50.059 rmmod nvme_fabrics 00:20:50.060 rmmod nvme_keyring 00:20:50.060 08:13:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:50.060 08:13:18 -- nvmf/common.sh@123 -- # set -e 00:20:50.060 08:13:18 -- nvmf/common.sh@124 -- # return 0 
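The cleanup running here mirrors the setup in reverse, and the same pattern closes out every test in this log: archive the tracepoint shared memory, stop the initiator and then the target, unload the NVMe host modules, and tear the namespace back down. Condensed from the entries above and below, with $OUTPUT_DIR and $SPDK_DIR abbreviating the workspace paths and $bdevperf_pid/$nvmfpid standing in for the literal 1090169 and 1089809:

    # save /dev/shm/nvmf_trace.0 into the build output for offline spdk_trace analysis
    tar -C /dev/shm/ -cvzf $OUTPUT_DIR/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    kill $bdevperf_pid && wait $bdevperf_pid   # stop bdevperf first (reactor_2)
    kill $nvmfpid && wait $nvmfpid             # then the nvmf_tgt it was attached to (reactor_1)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    _remove_spdk_ns                            # drops the cvl_0_0_ns_spdk namespace again
    ip -4 addr flush cvl_0_1
    rm -f $SPDK_DIR/test/nvmf/fips/key.txt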
00:20:50.060 08:13:18 -- nvmf/common.sh@477 -- # '[' -n 1089809 ']' 00:20:50.060 08:13:18 -- nvmf/common.sh@478 -- # killprocess 1089809 00:20:50.060 08:13:18 -- common/autotest_common.sh@926 -- # '[' -z 1089809 ']' 00:20:50.060 08:13:18 -- common/autotest_common.sh@930 -- # kill -0 1089809 00:20:50.060 08:13:18 -- common/autotest_common.sh@931 -- # uname 00:20:50.060 08:13:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:50.060 08:13:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1089809 00:20:50.060 08:13:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:50.060 08:13:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:50.060 08:13:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1089809' 00:20:50.060 killing process with pid 1089809 00:20:50.060 08:13:18 -- common/autotest_common.sh@945 -- # kill 1089809 00:20:50.060 08:13:18 -- common/autotest_common.sh@950 -- # wait 1089809 00:20:50.060 08:13:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:50.060 08:13:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:50.060 08:13:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:50.060 08:13:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.060 08:13:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:50.060 08:13:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.060 08:13:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.060 08:13:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.631 08:13:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:50.631 08:13:21 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:50.631 00:20:50.631 real 0m22.260s 00:20:50.631 user 0m23.407s 00:20:50.631 sys 0m9.432s 00:20:50.631 08:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.631 08:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 ************************************ 00:20:50.631 END TEST nvmf_fips 00:20:50.631 ************************************ 00:20:50.631 08:13:21 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:50.631 08:13:21 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:50.631 08:13:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:50.631 08:13:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:50.631 08:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:50.631 ************************************ 00:20:50.631 START TEST nvmf_fuzz 00:20:50.631 ************************************ 00:20:50.631 08:13:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:50.631 * Looking for test storage... 
00:20:50.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.631 08:13:21 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.631 08:13:21 -- nvmf/common.sh@7 -- # uname -s 00:20:50.631 08:13:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.631 08:13:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.631 08:13:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.631 08:13:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.631 08:13:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.631 08:13:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.631 08:13:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.631 08:13:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.631 08:13:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.631 08:13:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.631 08:13:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:50.631 08:13:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:50.631 08:13:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.631 08:13:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.631 08:13:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.631 08:13:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.631 08:13:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.631 08:13:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.631 08:13:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.631 08:13:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.631 08:13:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.631 08:13:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.631 08:13:21 -- paths/export.sh@5 -- # export PATH 00:20:50.631 08:13:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.631 08:13:21 -- nvmf/common.sh@46 -- # : 0 00:20:50.631 08:13:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:50.631 08:13:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:50.631 08:13:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:50.631 08:13:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.631 08:13:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.631 08:13:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:50.631 08:13:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:50.631 08:13:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:50.631 08:13:21 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:50.631 08:13:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:50.631 08:13:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.631 08:13:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:50.631 08:13:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:50.631 08:13:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:50.631 08:13:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.631 08:13:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.631 08:13:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.631 08:13:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:50.631 08:13:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:50.631 08:13:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:50.631 08:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:58.771 08:13:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:58.771 08:13:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:58.771 08:13:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:58.771 08:13:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:58.771 08:13:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:58.771 08:13:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:58.771 08:13:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:58.771 08:13:28 -- nvmf/common.sh@294 -- # net_devs=() 00:20:58.771 08:13:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:58.771 08:13:28 -- nvmf/common.sh@295 -- # e810=() 00:20:58.771 08:13:28 -- nvmf/common.sh@295 -- # local -ga e810 00:20:58.771 08:13:28 -- nvmf/common.sh@296 -- # x722=() 
00:20:58.771 08:13:28 -- nvmf/common.sh@296 -- # local -ga x722 00:20:58.771 08:13:28 -- nvmf/common.sh@297 -- # mlx=() 00:20:58.771 08:13:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:58.771 08:13:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.771 08:13:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:58.771 08:13:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:58.771 08:13:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:58.771 08:13:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:58.771 08:13:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:58.771 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:58.771 08:13:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:58.771 08:13:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:58.771 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:58.771 08:13:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:58.771 08:13:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:58.771 08:13:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.771 08:13:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:58.771 08:13:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.771 08:13:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:58.771 Found net devices under 0000:31:00.0: cvl_0_0 00:20:58.771 08:13:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:20:58.771 08:13:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:58.771 08:13:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.771 08:13:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:58.771 08:13:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.771 08:13:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:58.771 Found net devices under 0000:31:00.1: cvl_0_1 00:20:58.771 08:13:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.771 08:13:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:58.771 08:13:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:58.771 08:13:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:58.771 08:13:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:58.771 08:13:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.771 08:13:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.771 08:13:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.771 08:13:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:58.771 08:13:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.771 08:13:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.771 08:13:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:58.771 08:13:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.771 08:13:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.771 08:13:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:58.771 08:13:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:58.771 08:13:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.772 08:13:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.772 08:13:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.772 08:13:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.772 08:13:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:58.772 08:13:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.772 08:13:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.772 08:13:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.772 08:13:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:58.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:20:58.772 00:20:58.772 --- 10.0.0.2 ping statistics --- 00:20:58.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.772 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:20:58.772 08:13:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:20:58.772 00:20:58.772 --- 10.0.0.1 ping statistics --- 00:20:58.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.772 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:20:58.772 08:13:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.772 08:13:28 -- nvmf/common.sh@410 -- # return 0 00:20:58.772 08:13:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:58.772 08:13:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.772 08:13:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:58.772 08:13:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:58.772 08:13:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.772 08:13:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:58.772 08:13:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:58.772 08:13:28 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1096605 00:20:58.772 08:13:28 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:58.772 08:13:28 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:58.772 08:13:28 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1096605 00:20:58.772 08:13:28 -- common/autotest_common.sh@819 -- # '[' -z 1096605 ']' 00:20:58.772 08:13:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.772 08:13:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:58.772 08:13:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
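With the target namespace rebuilt and reachable, fabrics_fuzz.sh starts a single-core nvmf_tgt inside it and, as the trace below shows, gives the fuzzer a minimal target to hit: a TCP transport, one 64 MiB / 512-byte-block Malloc namespace under nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420. nvme_fuzz is then run twice, first for 30 seconds of randomized commands with a fixed seed, then replaying the canned example.json. Restated compactly, with rpc_cmd being the suite's wrapper around scripts/rpc.py and nvme_fuzz the binary under test/app/fuzz/nvme_fuzz/ in the workspace:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create -b Malloc0 64 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # pass 1: 30 s of randomized commands, seeded (-S 123456) so a failure is reproducible
    nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    # pass 2: deterministic replay of the packaged command set
    nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a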
00:20:58.772 08:13:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:58.772 08:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.772 08:13:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:58.772 08:13:29 -- common/autotest_common.sh@852 -- # return 0 00:20:58.772 08:13:29 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.772 08:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:58.772 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:58.772 08:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:58.772 08:13:29 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:58.772 08:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:58.772 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:58.772 Malloc0 00:20:58.772 08:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:58.772 08:13:29 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.772 08:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:58.772 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:58.772 08:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:58.772 08:13:29 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.772 08:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:58.772 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:58.772 08:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:58.772 08:13:29 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.772 08:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:58.772 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:20:58.772 08:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:58.772 08:13:29 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:58.772 08:13:29 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:30.878 Fuzzing completed. Shutting down the fuzz application 00:21:30.878 00:21:30.878 Dumping successful admin opcodes: 00:21:30.878 8, 9, 10, 24, 00:21:30.878 Dumping successful io opcodes: 00:21:30.878 0, 9, 00:21:30.878 NS: 0x200003aeff00 I/O qp, Total commands completed: 963198, total successful commands: 5634, random_seed: 2421405120 00:21:30.878 NS: 0x200003aeff00 admin qp, Total commands completed: 121218, total successful commands: 995, random_seed: 1562103424 00:21:30.878 08:13:59 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:30.878 Fuzzing completed. 
Shutting down the fuzz application 00:21:30.878 00:21:30.878 Dumping successful admin opcodes: 00:21:30.878 24, 00:21:30.878 Dumping successful io opcodes: 00:21:30.878 00:21:30.878 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3201016967 00:21:30.878 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3201096339 00:21:30.878 08:14:01 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.878 08:14:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:30.878 08:14:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.878 08:14:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:30.878 08:14:01 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:30.878 08:14:01 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:30.878 08:14:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:30.878 08:14:01 -- nvmf/common.sh@116 -- # sync 00:21:30.878 08:14:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:30.878 08:14:01 -- nvmf/common.sh@119 -- # set +e 00:21:30.878 08:14:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:30.878 08:14:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:30.878 rmmod nvme_tcp 00:21:30.878 rmmod nvme_fabrics 00:21:30.878 rmmod nvme_keyring 00:21:30.878 08:14:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:30.878 08:14:01 -- nvmf/common.sh@123 -- # set -e 00:21:30.878 08:14:01 -- nvmf/common.sh@124 -- # return 0 00:21:30.878 08:14:01 -- nvmf/common.sh@477 -- # '[' -n 1096605 ']' 00:21:30.878 08:14:01 -- nvmf/common.sh@478 -- # killprocess 1096605 00:21:30.878 08:14:01 -- common/autotest_common.sh@926 -- # '[' -z 1096605 ']' 00:21:30.878 08:14:01 -- common/autotest_common.sh@930 -- # kill -0 1096605 00:21:30.878 08:14:01 -- common/autotest_common.sh@931 -- # uname 00:21:30.878 08:14:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:30.878 08:14:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1096605 00:21:30.878 08:14:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:30.878 08:14:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:30.878 08:14:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1096605' 00:21:30.878 killing process with pid 1096605 00:21:30.878 08:14:01 -- common/autotest_common.sh@945 -- # kill 1096605 00:21:30.878 08:14:01 -- common/autotest_common.sh@950 -- # wait 1096605 00:21:30.878 08:14:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:30.878 08:14:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:30.878 08:14:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:30.878 08:14:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.878 08:14:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:30.878 08:14:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.878 08:14:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.878 08:14:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.790 08:14:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:32.790 08:14:03 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:33.050 00:21:33.050 real 0m42.338s 00:21:33.050 user 0m57.391s 00:21:33.050 sys 
0m14.225s 00:21:33.050 08:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.050 08:14:03 -- common/autotest_common.sh@10 -- # set +x 00:21:33.050 ************************************ 00:21:33.050 END TEST nvmf_fuzz 00:21:33.050 ************************************ 00:21:33.050 08:14:03 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:33.050 08:14:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:33.050 08:14:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:33.050 08:14:03 -- common/autotest_common.sh@10 -- # set +x 00:21:33.050 ************************************ 00:21:33.050 START TEST nvmf_multiconnection 00:21:33.050 ************************************ 00:21:33.050 08:14:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:33.050 * Looking for test storage... 00:21:33.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:33.050 08:14:03 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.050 08:14:03 -- nvmf/common.sh@7 -- # uname -s 00:21:33.050 08:14:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.050 08:14:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.050 08:14:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.050 08:14:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.050 08:14:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.050 08:14:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.050 08:14:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.050 08:14:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.051 08:14:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.051 08:14:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.051 08:14:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.051 08:14:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:33.051 08:14:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.051 08:14:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.051 08:14:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.051 08:14:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.051 08:14:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.051 08:14:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.051 08:14:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.051 08:14:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.051 08:14:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.051 08:14:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.051 08:14:03 -- paths/export.sh@5 -- # export PATH 00:21:33.051 08:14:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.051 08:14:03 -- nvmf/common.sh@46 -- # : 0 00:21:33.051 08:14:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:33.051 08:14:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:33.051 08:14:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:33.051 08:14:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.051 08:14:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.051 08:14:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:33.051 08:14:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:33.051 08:14:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:33.051 08:14:03 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:33.051 08:14:03 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:33.051 08:14:03 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:33.051 08:14:03 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:33.051 08:14:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:33.051 08:14:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.051 08:14:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:33.051 08:14:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:33.051 08:14:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:33.051 08:14:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.051 08:14:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.051 08:14:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.051 08:14:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:33.051 08:14:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:33.051 08:14:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:33.051 08:14:03 -- common/autotest_common.sh@10 -- 
# set +x 00:21:41.190 08:14:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:41.190 08:14:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:41.190 08:14:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:41.190 08:14:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:41.190 08:14:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:41.190 08:14:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:41.190 08:14:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:41.190 08:14:10 -- nvmf/common.sh@294 -- # net_devs=() 00:21:41.190 08:14:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:41.190 08:14:10 -- nvmf/common.sh@295 -- # e810=() 00:21:41.190 08:14:10 -- nvmf/common.sh@295 -- # local -ga e810 00:21:41.190 08:14:10 -- nvmf/common.sh@296 -- # x722=() 00:21:41.190 08:14:10 -- nvmf/common.sh@296 -- # local -ga x722 00:21:41.190 08:14:10 -- nvmf/common.sh@297 -- # mlx=() 00:21:41.190 08:14:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:41.190 08:14:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.190 08:14:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:41.190 08:14:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:41.190 08:14:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:41.190 08:14:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:41.190 08:14:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:41.190 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:41.190 08:14:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:41.190 08:14:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:41.190 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:41.190 08:14:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.190 08:14:10 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:41.190 08:14:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:41.190 08:14:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.190 08:14:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:41.190 08:14:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.190 08:14:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:41.190 Found net devices under 0000:31:00.0: cvl_0_0 00:21:41.190 08:14:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.190 08:14:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:41.190 08:14:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.190 08:14:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:41.190 08:14:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.190 08:14:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:41.190 Found net devices under 0000:31:00.1: cvl_0_1 00:21:41.190 08:14:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.190 08:14:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:41.190 08:14:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:41.190 08:14:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:41.190 08:14:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:41.190 08:14:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.190 08:14:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.190 08:14:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.190 08:14:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:41.190 08:14:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.190 08:14:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.190 08:14:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:41.190 08:14:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.190 08:14:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.190 08:14:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:41.190 08:14:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:41.190 08:14:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.190 08:14:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.190 08:14:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.190 08:14:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.190 08:14:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:41.190 08:14:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.190 08:14:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.190 08:14:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.190 08:14:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:41.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:41.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:21:41.190 00:21:41.190 --- 10.0.0.2 ping statistics --- 00:21:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.190 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:21:41.190 08:14:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:21:41.190 00:21:41.191 --- 10.0.0.1 ping statistics --- 00:21:41.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.191 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:21:41.191 08:14:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.191 08:14:10 -- nvmf/common.sh@410 -- # return 0 00:21:41.191 08:14:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:41.191 08:14:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.191 08:14:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:41.191 08:14:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:41.191 08:14:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.191 08:14:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:41.191 08:14:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:41.191 08:14:10 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:41.191 08:14:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:41.191 08:14:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:41.191 08:14:10 -- common/autotest_common.sh@10 -- # set +x 00:21:41.191 08:14:10 -- nvmf/common.sh@469 -- # nvmfpid=1107125 00:21:41.191 08:14:10 -- nvmf/common.sh@470 -- # waitforlisten 1107125 00:21:41.191 08:14:10 -- common/autotest_common.sh@819 -- # '[' -z 1107125 ']' 00:21:41.191 08:14:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.191 08:14:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:41.191 08:14:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.191 08:14:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:41.191 08:14:10 -- common/autotest_common.sh@10 -- # set +x 00:21:41.191 08:14:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:41.191 [2024-06-11 08:14:11.003953] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:41.191 [2024-06-11 08:14:11.004017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.191 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.191 [2024-06-11 08:14:11.076306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.191 [2024-06-11 08:14:11.151690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:41.191 [2024-06-11 08:14:11.151827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
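multiconnection.sh repeats the same bring-up but fans it out: with MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and NVMF_SUBSYS=11, the rpc_cmd traces that follow create eleven Malloc bdevs, wrap each one in its own subsystem (cnode1 through cnode11, serials SPDK1 through SPDK11), and expose them all on the single 10.0.0.2:4420 listener, which the rest of the test then connects to concurrently. The loop those per-subsystem calls come from, in sketch form:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 $NVMF_SUBSYS); do   # NVMF_SUBSYS=11 in this run
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done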
00:21:41.191 [2024-06-11 08:14:11.151836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.191 [2024-06-11 08:14:11.151844] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.191 [2024-06-11 08:14:11.151989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.191 [2024-06-11 08:14:11.152096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.191 [2024-06-11 08:14:11.152265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.191 [2024-06-11 08:14:11.152266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.191 08:14:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:41.191 08:14:11 -- common/autotest_common.sh@852 -- # return 0 00:21:41.191 08:14:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:41.191 08:14:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:41.191 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.191 08:14:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.191 08:14:11 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.191 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.191 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.191 [2024-06-11 08:14:11.815532] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.191 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.191 08:14:11 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:41.191 08:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.191 08:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:41.191 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.191 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 Malloc1 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 [2024-06-11 08:14:11.882910] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.452 08:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:41.452 08:14:11 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 Malloc2 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.452 08:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 Malloc3 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.452 08:14:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:41.452 08:14:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 Malloc4 00:21:41.452 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:41.452 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:12 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:41.452 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:41.452 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.452 08:14:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:41.452 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 Malloc5 00:21:41.452 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:41.452 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:41.452 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.452 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.452 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.452 08:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:41.453 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.453 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.453 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.453 08:14:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.453 08:14:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:41.453 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.453 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 Malloc6 00:21:41.712 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 08:14:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.712 08:14:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 Malloc7 00:21:41.712 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.712 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.712 08:14:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.712 08:14:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:41.712 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.712 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 Malloc8 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.713 08:14:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 Malloc9 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.713 08:14:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 Malloc10 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.713 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.713 08:14:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.713 08:14:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:41.713 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.713 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 Malloc11 00:21:41.973 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.973 08:14:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:41.973 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.973 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.973 08:14:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:41.973 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.973 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.973 08:14:12 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:41.973 08:14:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.973 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:41.973 08:14:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.973 08:14:12 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:41.973 08:14:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.973 08:14:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:43.359 08:14:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:43.359 08:14:13 -- common/autotest_common.sh@1177 -- # local i=0 00:21:43.359 08:14:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:43.359 08:14:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:43.359 08:14:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:45.273 08:14:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:45.273 08:14:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:45.273 08:14:15 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:45.273 08:14:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:45.273 08:14:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:45.273 08:14:15 -- common/autotest_common.sh@1187 -- # return 0 00:21:45.273 08:14:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.273 08:14:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:47.189 08:14:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:47.189 08:14:17 -- common/autotest_common.sh@1177 -- # local i=0 00:21:47.189 08:14:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:47.189 08:14:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:47.189 08:14:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:49.100 08:14:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:49.100 08:14:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:49.100 08:14:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:49.100 08:14:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:49.100 08:14:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.100 08:14:19 -- common/autotest_common.sh@1187 -- # return 0 00:21:49.100 08:14:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.100 08:14:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:50.486 08:14:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:50.486 08:14:20 -- common/autotest_common.sh@1177 -- # local i=0 00:21:50.486 08:14:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.486 08:14:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:50.486 08:14:20 -- 
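For reference, the target-side setup that the xtrace above repeats for Malloc1 through Malloc11 boils down to four RPCs per index. A minimal sketch of that loop, assuming SPDK's scripts/rpc.py is on PATH, the tcp transport has already been created, and 10.0.0.2 is the listen address used in this run:

    #!/usr/bin/env bash
    # Sketch only: mirrors the bdev/subsystem/namespace/listener calls traced above.
    for i in $(seq 1 11); do
      scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                            # 64 MiB malloc bdev, 512 B blocks
      scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i" # allow any host, serial number SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"     # expose the bdev as a namespace
      scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The teardown near the end of the run reverses this: nvme disconnect -n <nqn> on the host, then nvmf_delete_subsystem for each cnode on the target.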
common/autotest_common.sh@1184 -- # sleep 2 00:21:52.399 08:14:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:52.399 08:14:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:52.399 08:14:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:52.399 08:14:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:52.399 08:14:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.399 08:14:22 -- common/autotest_common.sh@1187 -- # return 0 00:21:52.399 08:14:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.399 08:14:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:54.353 08:14:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:54.353 08:14:24 -- common/autotest_common.sh@1177 -- # local i=0 00:21:54.353 08:14:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:54.353 08:14:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:54.353 08:14:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:56.318 08:14:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:56.319 08:14:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:56.319 08:14:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:21:56.319 08:14:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:56.319 08:14:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:56.319 08:14:26 -- common/autotest_common.sh@1187 -- # return 0 00:21:56.319 08:14:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:56.319 08:14:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:57.700 08:14:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:57.700 08:14:28 -- common/autotest_common.sh@1177 -- # local i=0 00:21:57.700 08:14:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:57.700 08:14:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:57.700 08:14:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:59.607 08:14:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:59.607 08:14:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:59.607 08:14:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:21:59.865 08:14:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:59.865 08:14:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:59.865 08:14:30 -- common/autotest_common.sh@1187 -- # return 0 00:21:59.865 08:14:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.865 08:14:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:01.769 08:14:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:01.769 08:14:31 -- common/autotest_common.sh@1177 -- # local i=0 00:22:01.769 08:14:31 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:01.769 08:14:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:01.769 08:14:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:03.674 08:14:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:03.674 08:14:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:03.674 08:14:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:03.674 08:14:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:03.674 08:14:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.674 08:14:33 -- common/autotest_common.sh@1187 -- # return 0 00:22:03.674 08:14:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.674 08:14:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:05.053 08:14:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:05.053 08:14:35 -- common/autotest_common.sh@1177 -- # local i=0 00:22:05.053 08:14:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:05.053 08:14:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:05.053 08:14:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:06.962 08:14:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:07.222 08:14:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:07.222 08:14:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:07.222 08:14:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:07.222 08:14:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:07.222 08:14:37 -- common/autotest_common.sh@1187 -- # return 0 00:22:07.222 08:14:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:07.222 08:14:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:09.133 08:14:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:09.133 08:14:39 -- common/autotest_common.sh@1177 -- # local i=0 00:22:09.133 08:14:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:09.133 08:14:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:09.133 08:14:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:11.046 08:14:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:11.046 08:14:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:11.046 08:14:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:11.046 08:14:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:11.046 08:14:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:11.046 08:14:41 -- common/autotest_common.sh@1187 -- # return 0 00:22:11.046 08:14:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.046 08:14:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:12.426 08:14:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:12.426 
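The host-side connect-and-wait pattern repeated above can be approximated with stock nvme-cli. A rough sketch, reusing the hostnqn, hostid and target address from this log and the same lsblk serial check that waitforserial performs (the real helper bounds the loop at 15 retries; this sketch polls indefinitely):

    #!/usr/bin/env bash
    # Sketch: connect each subsystem over TCP, then poll until a block device
    # whose serial matches SPDK$i shows up in lsblk.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
    for i in $(seq 1 11); do
      nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420 \
           --hostnqn="$HOSTNQN" --hostid="$HOSTID"
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        sleep 2
      done
    done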
08:14:43 -- common/autotest_common.sh@1177 -- # local i=0 00:22:12.426 08:14:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.426 08:14:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:12.426 08:14:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:14.970 08:14:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:14.970 08:14:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:14.970 08:14:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:14.970 08:14:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:14.970 08:14:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:14.970 08:14:45 -- common/autotest_common.sh@1187 -- # return 0 00:22:14.970 08:14:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.970 08:14:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:16.354 08:14:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:16.354 08:14:46 -- common/autotest_common.sh@1177 -- # local i=0 00:22:16.354 08:14:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:16.354 08:14:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:16.354 08:14:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:18.266 08:14:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:18.266 08:14:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:18.266 08:14:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:18.266 08:14:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:18.266 08:14:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.266 08:14:48 -- common/autotest_common.sh@1187 -- # return 0 00:22:18.266 08:14:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.266 08:14:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:20.179 08:14:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:20.179 08:14:50 -- common/autotest_common.sh@1177 -- # local i=0 00:22:20.179 08:14:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:20.179 08:14:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:20.179 08:14:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:22.088 08:14:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:22.088 08:14:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:22.088 08:14:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:22.088 08:14:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:22.088 08:14:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:22.088 08:14:52 -- common/autotest_common.sh@1187 -- # return 0 00:22:22.088 08:14:52 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:22.088 [global] 00:22:22.088 thread=1 00:22:22.089 invalidate=1 00:22:22.089 rw=read 00:22:22.089 time_based=1 00:22:22.089 
runtime=10 00:22:22.089 ioengine=libaio 00:22:22.089 direct=1 00:22:22.089 bs=262144 00:22:22.089 iodepth=64 00:22:22.089 norandommap=1 00:22:22.089 numjobs=1 00:22:22.089 00:22:22.089 [job0] 00:22:22.089 filename=/dev/nvme0n1 00:22:22.089 [job1] 00:22:22.089 filename=/dev/nvme10n1 00:22:22.089 [job2] 00:22:22.089 filename=/dev/nvme1n1 00:22:22.089 [job3] 00:22:22.089 filename=/dev/nvme2n1 00:22:22.089 [job4] 00:22:22.089 filename=/dev/nvme3n1 00:22:22.089 [job5] 00:22:22.089 filename=/dev/nvme4n1 00:22:22.089 [job6] 00:22:22.089 filename=/dev/nvme5n1 00:22:22.089 [job7] 00:22:22.089 filename=/dev/nvme6n1 00:22:22.089 [job8] 00:22:22.089 filename=/dev/nvme7n1 00:22:22.089 [job9] 00:22:22.089 filename=/dev/nvme8n1 00:22:22.089 [job10] 00:22:22.089 filename=/dev/nvme9n1 00:22:22.373 Could not set queue depth (nvme0n1) 00:22:22.373 Could not set queue depth (nvme10n1) 00:22:22.373 Could not set queue depth (nvme1n1) 00:22:22.373 Could not set queue depth (nvme2n1) 00:22:22.373 Could not set queue depth (nvme3n1) 00:22:22.373 Could not set queue depth (nvme4n1) 00:22:22.373 Could not set queue depth (nvme5n1) 00:22:22.373 Could not set queue depth (nvme6n1) 00:22:22.373 Could not set queue depth (nvme7n1) 00:22:22.373 Could not set queue depth (nvme8n1) 00:22:22.373 Could not set queue depth (nvme9n1) 00:22:22.634 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.634 fio-3.35 00:22:22.634 Starting 11 threads 00:22:34.864 00:22:34.864 job0: (groupid=0, jobs=1): err= 0: pid=1115791: Tue Jun 11 08:15:03 2024 00:22:34.864 read: IOPS=904, BW=226MiB/s (237MB/s)(2279MiB/10074msec) 00:22:34.864 slat (usec): min=5, max=103981, avg=944.82, stdev=3522.33 00:22:34.864 clat (msec): min=3, max=242, avg=69.70, stdev=29.74 00:22:34.864 lat (msec): min=3, max=242, avg=70.64, stdev=30.31 00:22:34.864 clat percentiles (msec): 00:22:34.864 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 45], 00:22:34.864 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 75], 00:22:34.864 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 112], 95.00th=[ 123], 00:22:34.864 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 157], 00:22:34.864 | 99.99th=[ 243] 00:22:34.864 bw ( KiB/s): min=145920, max=328558, per=8.70%, avg=231698.30, 
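The job file fio-wrapper prints above (262144-byte sequential reads at queue depth 64, libaio, direct I/O, a 10-second time-based run, one job per connected namespace) is straightforward to reproduce standalone. A one-device sketch on the fio command line, with /dev/nvme0n1 standing in for whichever namespace is under test:

    # Sketch: single-device equivalent of the read pass above; repeat the
    # --name/--filename pair (or use a job file) to cover all eleven namespaces.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
        --thread --invalidate=1 --norandommap --numjobs=1 \
        --time_based --runtime=10

The second fio pass later in the log uses the same template with rw=randwrite (fio-wrapper's -t randwrite).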
stdev=63110.59, samples=20 00:22:34.864 iops : min= 570, max= 1283, avg=905.05, stdev=246.49, samples=20 00:22:34.864 lat (msec) : 4=0.03%, 10=1.26%, 20=3.53%, 50=22.17%, 100=55.41% 00:22:34.864 lat (msec) : 250=17.60% 00:22:34.864 cpu : usr=0.22%, sys=2.86%, ctx=2225, majf=0, minf=3534 00:22:34.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:34.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.864 issued rwts: total=9116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.864 job1: (groupid=0, jobs=1): err= 0: pid=1115809: Tue Jun 11 08:15:03 2024 00:22:34.864 read: IOPS=591, BW=148MiB/s (155MB/s)(1489MiB/10066msec) 00:22:34.864 slat (usec): min=5, max=74491, avg=1498.78, stdev=4455.21 00:22:34.864 clat (msec): min=5, max=203, avg=106.55, stdev=28.26 00:22:34.864 lat (msec): min=5, max=203, avg=108.05, stdev=28.90 00:22:34.864 clat percentiles (msec): 00:22:34.864 | 1.00th=[ 12], 5.00th=[ 45], 10.00th=[ 77], 20.00th=[ 92], 00:22:34.864 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 111], 60.00th=[ 117], 00:22:34.864 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 134], 95.00th=[ 140], 00:22:34.864 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 197], 99.95th=[ 197], 00:22:34.864 | 99.99th=[ 205] 00:22:34.864 bw ( KiB/s): min=120832, max=219648, per=5.68%, avg=151182.89, stdev=28321.34, samples=19 00:22:34.864 iops : min= 472, max= 858, avg=590.53, stdev=110.60, samples=19 00:22:34.864 lat (msec) : 10=0.45%, 20=1.65%, 50=4.63%, 100=21.81%, 250=71.46% 00:22:34.864 cpu : usr=0.23%, sys=1.68%, ctx=1501, majf=0, minf=4097 00:22:34.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:34.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.864 issued rwts: total=5956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.864 job2: (groupid=0, jobs=1): err= 0: pid=1115819: Tue Jun 11 08:15:03 2024 00:22:34.864 read: IOPS=855, BW=214MiB/s (224MB/s)(2142MiB/10015msec) 00:22:34.864 slat (usec): min=6, max=107499, avg=1020.25, stdev=3931.34 00:22:34.864 clat (msec): min=2, max=225, avg=73.72, stdev=39.80 00:22:34.864 lat (msec): min=2, max=262, avg=74.74, stdev=40.49 00:22:34.864 clat percentiles (msec): 00:22:34.864 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 36], 00:22:34.864 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 60], 60.00th=[ 90], 00:22:34.864 | 70.00th=[ 106], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 133], 00:22:34.864 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 182], 00:22:34.864 | 99.99th=[ 226] 00:22:34.864 bw ( KiB/s): min=123392, max=421376, per=8.18%, avg=217777.65, stdev=94760.79, samples=20 00:22:34.864 iops : min= 482, max= 1646, avg=850.65, stdev=370.15, samples=20 00:22:34.864 lat (msec) : 4=0.07%, 10=0.91%, 20=3.07%, 50=38.76%, 100=21.66% 00:22:34.864 lat (msec) : 250=35.54% 00:22:34.864 cpu : usr=0.32%, sys=2.58%, ctx=2125, majf=0, minf=4097 00:22:34.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=8569,0,0,0 
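As a quick sanity check on how fio's summary figures fit together, job0 above issued 9116 reads of 262144 bytes over a 10074 ms window:

    9116 * 256 KiB      = 2279 MiB transferred   (the 2279MiB in "2279MiB/10074msec")
    2279 MiB / 10.074 s ~ 226 MiB/s              (the BW=226MiB/s figure; about 237 MB/s in decimal units)
    9116 / 10.074 s     ~ 905                    (consistent with the reported IOPS=904)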
short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.865 job3: (groupid=0, jobs=1): err= 0: pid=1115830: Tue Jun 11 08:15:03 2024 00:22:34.865 read: IOPS=647, BW=162MiB/s (170MB/s)(1632MiB/10078msec) 00:22:34.865 slat (usec): min=8, max=64178, avg=1432.96, stdev=4194.38 00:22:34.865 clat (msec): min=25, max=230, avg=97.25, stdev=24.59 00:22:34.865 lat (msec): min=27, max=230, avg=98.68, stdev=25.13 00:22:34.865 clat percentiles (msec): 00:22:34.865 | 1.00th=[ 53], 5.00th=[ 65], 10.00th=[ 70], 20.00th=[ 75], 00:22:34.865 | 30.00th=[ 80], 40.00th=[ 85], 50.00th=[ 95], 60.00th=[ 104], 00:22:34.865 | 70.00th=[ 111], 80.00th=[ 124], 90.00th=[ 130], 95.00th=[ 136], 00:22:34.865 | 99.00th=[ 150], 99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 199], 00:22:34.865 | 99.99th=[ 232] 00:22:34.865 bw ( KiB/s): min=119808, max=230912, per=6.22%, avg=165544.90, stdev=33458.72, samples=20 00:22:34.865 iops : min= 468, max= 902, avg=646.65, stdev=130.70, samples=20 00:22:34.865 lat (msec) : 50=0.67%, 100=55.00%, 250=44.33% 00:22:34.865 cpu : usr=0.27%, sys=1.97%, ctx=1607, majf=0, minf=4097 00:22:34.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=6529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.865 job4: (groupid=0, jobs=1): err= 0: pid=1115837: Tue Jun 11 08:15:03 2024 00:22:34.865 read: IOPS=807, BW=202MiB/s (212MB/s)(2033MiB/10072msec) 00:22:34.865 slat (usec): min=6, max=69089, avg=1090.32, stdev=3541.48 00:22:34.865 clat (msec): min=3, max=190, avg=78.07, stdev=23.86 00:22:34.865 lat (msec): min=3, max=199, avg=79.16, stdev=24.33 00:22:34.865 clat percentiles (msec): 00:22:34.865 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 63], 00:22:34.865 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 82], 00:22:34.865 | 70.00th=[ 86], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 120], 00:22:34.865 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 163], 00:22:34.865 | 99.99th=[ 190] 00:22:34.865 bw ( KiB/s): min=134412, max=330240, per=7.76%, avg=206579.80, stdev=49037.78, samples=20 00:22:34.865 iops : min= 525, max= 1290, avg=806.95, stdev=191.56, samples=20 00:22:34.865 lat (msec) : 4=0.04%, 10=0.68%, 20=1.29%, 50=9.43%, 100=72.81% 00:22:34.865 lat (msec) : 250=15.75% 00:22:34.865 cpu : usr=0.40%, sys=2.30%, ctx=1959, majf=0, minf=4097 00:22:34.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=8132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.865 job5: (groupid=0, jobs=1): err= 0: pid=1115863: Tue Jun 11 08:15:03 2024 00:22:34.865 read: IOPS=823, BW=206MiB/s (216MB/s)(2072MiB/10068msec) 00:22:34.865 slat (usec): min=8, max=33490, avg=1173.50, stdev=2947.21 00:22:34.865 clat (msec): min=3, max=145, avg=76.49, stdev=19.35 00:22:34.865 lat (msec): min=3, max=145, avg=77.66, stdev=19.66 00:22:34.865 clat percentiles (msec): 00:22:34.865 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 54], 20.00th=[ 63], 00:22:34.865 | 30.00th=[ 71], 40.00th=[ 75], 
50.00th=[ 79], 60.00th=[ 82], 00:22:34.865 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 99], 95.00th=[ 111], 00:22:34.865 | 99.00th=[ 120], 99.50th=[ 127], 99.90th=[ 138], 99.95th=[ 140], 00:22:34.865 | 99.99th=[ 146] 00:22:34.865 bw ( KiB/s): min=144384, max=321024, per=7.91%, avg=210585.60, stdev=45966.71, samples=20 00:22:34.865 iops : min= 564, max= 1254, avg=822.60, stdev=179.56, samples=20 00:22:34.865 lat (msec) : 4=0.01%, 10=0.14%, 20=0.52%, 50=7.96%, 100=82.21% 00:22:34.865 lat (msec) : 250=9.16% 00:22:34.865 cpu : usr=0.38%, sys=2.86%, ctx=1778, majf=0, minf=4097 00:22:34.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=8289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.865 job6: (groupid=0, jobs=1): err= 0: pid=1115875: Tue Jun 11 08:15:03 2024 00:22:34.865 read: IOPS=652, BW=163MiB/s (171MB/s)(1645MiB/10081msec) 00:22:34.865 slat (usec): min=8, max=70964, avg=1498.83, stdev=4365.88 00:22:34.865 clat (msec): min=14, max=209, avg=96.41, stdev=31.59 00:22:34.865 lat (msec): min=16, max=209, avg=97.91, stdev=32.23 00:22:34.865 clat percentiles (msec): 00:22:34.865 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 68], 00:22:34.865 | 30.00th=[ 81], 40.00th=[ 99], 50.00th=[ 104], 60.00th=[ 110], 00:22:34.865 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 129], 95.00th=[ 136], 00:22:34.865 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 203], 99.95th=[ 203], 00:22:34.865 | 99.99th=[ 209] 00:22:34.865 bw ( KiB/s): min=119296, max=386048, per=6.27%, avg=166849.20, stdev=62306.87, samples=20 00:22:34.865 iops : min= 466, max= 1508, avg=651.75, stdev=243.39, samples=20 00:22:34.865 lat (msec) : 20=0.18%, 50=13.28%, 100=29.73%, 250=56.81% 00:22:34.865 cpu : usr=0.26%, sys=2.72%, ctx=1571, majf=0, minf=4097 00:22:34.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=6580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.865 job7: (groupid=0, jobs=1): err= 0: pid=1115885: Tue Jun 11 08:15:03 2024 00:22:34.865 read: IOPS=907, BW=227MiB/s (238MB/s)(2286MiB/10070msec) 00:22:34.865 slat (usec): min=7, max=32664, avg=1076.21, stdev=2792.00 00:22:34.865 clat (msec): min=8, max=141, avg=69.33, stdev=26.78 00:22:34.865 lat (msec): min=8, max=141, avg=70.41, stdev=27.20 00:22:34.865 clat percentiles (msec): 00:22:34.865 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 31], 00:22:34.865 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 81], 00:22:34.865 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 100], 95.00th=[ 110], 00:22:34.865 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 136], 99.95th=[ 142], 00:22:34.865 | 99.99th=[ 142] 00:22:34.865 bw ( KiB/s): min=137216, max=565248, per=8.73%, avg=232396.80, stdev=109870.79, samples=20 00:22:34.865 iops : min= 536, max= 2208, avg=907.80, stdev=429.18, samples=20 00:22:34.865 lat (msec) : 10=0.03%, 20=0.38%, 50=24.31%, 100=65.95%, 250=9.33% 00:22:34.865 cpu : usr=0.32%, sys=3.06%, ctx=1920, majf=0, minf=4097 00:22:34.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.3% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=9142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.865 job8: (groupid=0, jobs=1): err= 0: pid=1115914: Tue Jun 11 08:15:03 2024 00:22:34.865 read: IOPS=1606, BW=402MiB/s (421MB/s)(4048MiB/10081msec) 00:22:34.865 slat (usec): min=5, max=31853, avg=611.66, stdev=1658.53 00:22:34.865 clat (msec): min=19, max=206, avg=39.18, stdev=21.57 00:22:34.865 lat (msec): min=20, max=208, avg=39.80, stdev=21.89 00:22:34.865 clat percentiles (msec): 00:22:34.865 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 27], 00:22:34.865 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 33], 00:22:34.865 | 70.00th=[ 41], 80.00th=[ 48], 90.00th=[ 65], 95.00th=[ 96], 00:22:34.865 | 99.00th=[ 117], 99.50th=[ 125], 99.90th=[ 165], 99.95th=[ 201], 00:22:34.865 | 99.99th=[ 207] 00:22:34.865 bw ( KiB/s): min=145408, max=610816, per=16.03%, avg=426971.63, stdev=149276.48, samples=19 00:22:34.865 iops : min= 568, max= 2386, avg=1667.84, stdev=583.12, samples=19 00:22:34.865 lat (msec) : 20=0.01%, 50=81.83%, 100=14.10%, 250=4.06% 00:22:34.865 cpu : usr=0.48%, sys=4.23%, ctx=3188, majf=0, minf=4097 00:22:34.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=16193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.865 job9: (groupid=0, jobs=1): err= 0: pid=1115927: Tue Jun 11 08:15:03 2024 00:22:34.865 read: IOPS=774, BW=194MiB/s (203MB/s)(1952MiB/10085msec) 00:22:34.865 slat (usec): min=5, max=78667, avg=1082.77, stdev=3936.76 00:22:34.865 clat (msec): min=3, max=203, avg=81.50, stdev=38.36 00:22:34.865 lat (msec): min=3, max=263, avg=82.58, stdev=39.07 00:22:34.865 clat percentiles (msec): 00:22:34.865 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 44], 00:22:34.865 | 30.00th=[ 53], 40.00th=[ 69], 50.00th=[ 81], 60.00th=[ 99], 00:22:34.865 | 70.00th=[ 110], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 136], 00:22:34.865 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 192], 99.95th=[ 197], 00:22:34.865 | 99.99th=[ 203] 00:22:34.865 bw ( KiB/s): min=117248, max=333824, per=7.44%, avg=198211.15, stdev=70705.91, samples=20 00:22:34.865 iops : min= 458, max= 1304, avg=774.25, stdev=276.20, samples=20 00:22:34.865 lat (msec) : 4=0.03%, 10=0.94%, 20=2.38%, 50=24.15%, 100=34.23% 00:22:34.865 lat (msec) : 250=38.28% 00:22:34.865 cpu : usr=0.38%, sys=2.35%, ctx=2016, majf=0, minf=4097 00:22:34.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:34.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.865 issued rwts: total=7806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.866 job10: (groupid=0, jobs=1): err= 0: pid=1115937: Tue Jun 11 08:15:03 2024 00:22:34.866 read: IOPS=1857, BW=464MiB/s (487MB/s)(4649MiB/10013msec) 00:22:34.866 slat (usec): min=6, max=27158, avg=524.55, stdev=1299.30 00:22:34.866 clat (msec): 
min=3, max=108, avg=33.90, stdev=12.21 00:22:34.866 lat (msec): min=3, max=108, avg=34.42, stdev=12.36 00:22:34.866 clat percentiles (msec): 00:22:34.866 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 28], 00:22:34.866 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:22:34.866 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 53], 95.00th=[ 63], 00:22:34.866 | 99.00th=[ 82], 99.50th=[ 90], 99.90th=[ 102], 99.95th=[ 106], 00:22:34.866 | 99.99th=[ 106] 00:22:34.866 bw ( KiB/s): min=245248, max=591360, per=17.82%, avg=474444.80, stdev=113562.88, samples=20 00:22:34.866 iops : min= 958, max= 2310, avg=1853.30, stdev=443.60, samples=20 00:22:34.866 lat (msec) : 4=0.01%, 10=0.17%, 20=0.55%, 50=88.05%, 100=11.09% 00:22:34.866 lat (msec) : 250=0.12% 00:22:34.866 cpu : usr=0.43%, sys=5.61%, ctx=3603, majf=0, minf=4097 00:22:34.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:34.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:34.866 issued rwts: total=18596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.866 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:34.866 00:22:34.866 Run status group 0 (all jobs): 00:22:34.866 READ: bw=2601MiB/s (2727MB/s), 148MiB/s-464MiB/s (155MB/s-487MB/s), io=25.6GiB (27.5GB), run=10013-10085msec 00:22:34.866 00:22:34.866 Disk stats (read/write): 00:22:34.866 nvme0n1: ios=17617/0, merge=0/0, ticks=1221605/0, in_queue=1221605, util=96.48% 00:22:34.866 nvme10n1: ios=11595/0, merge=0/0, ticks=1213568/0, in_queue=1213568, util=96.66% 00:22:34.866 nvme1n1: ios=16374/0, merge=0/0, ticks=1220977/0, in_queue=1220977, util=97.04% 00:22:34.866 nvme2n1: ios=12817/0, merge=0/0, ticks=1215357/0, in_queue=1215357, util=97.29% 00:22:34.866 nvme3n1: ios=15880/0, merge=0/0, ticks=1219488/0, in_queue=1219488, util=97.40% 00:22:34.866 nvme4n1: ios=16208/0, merge=0/0, ticks=1215711/0, in_queue=1215711, util=97.83% 00:22:34.866 nvme5n1: ios=12884/0, merge=0/0, ticks=1208869/0, in_queue=1208869, util=98.08% 00:22:34.866 nvme6n1: ios=17916/0, merge=0/0, ticks=1216027/0, in_queue=1216027, util=98.24% 00:22:34.866 nvme7n1: ios=32103/0, merge=0/0, ticks=1214948/0, in_queue=1214948, util=98.76% 00:22:34.866 nvme8n1: ios=15360/0, merge=0/0, ticks=1218480/0, in_queue=1218480, util=99.05% 00:22:34.866 nvme9n1: ios=36696/0, merge=0/0, ticks=1220785/0, in_queue=1220785, util=99.23% 00:22:34.866 08:15:03 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:34.866 [global] 00:22:34.866 thread=1 00:22:34.866 invalidate=1 00:22:34.866 rw=randwrite 00:22:34.866 time_based=1 00:22:34.866 runtime=10 00:22:34.866 ioengine=libaio 00:22:34.866 direct=1 00:22:34.866 bs=262144 00:22:34.866 iodepth=64 00:22:34.866 norandommap=1 00:22:34.866 numjobs=1 00:22:34.866 00:22:34.866 [job0] 00:22:34.866 filename=/dev/nvme0n1 00:22:34.866 [job1] 00:22:34.866 filename=/dev/nvme10n1 00:22:34.866 [job2] 00:22:34.866 filename=/dev/nvme1n1 00:22:34.866 [job3] 00:22:34.866 filename=/dev/nvme2n1 00:22:34.866 [job4] 00:22:34.866 filename=/dev/nvme3n1 00:22:34.866 [job5] 00:22:34.866 filename=/dev/nvme4n1 00:22:34.866 [job6] 00:22:34.866 filename=/dev/nvme5n1 00:22:34.866 [job7] 00:22:34.866 filename=/dev/nvme6n1 00:22:34.866 [job8] 00:22:34.866 filename=/dev/nvme7n1 00:22:34.866 [job9] 00:22:34.866 filename=/dev/nvme8n1 00:22:34.866 [job10] 
00:22:34.866 filename=/dev/nvme9n1 00:22:34.866 Could not set queue depth (nvme0n1) 00:22:34.866 Could not set queue depth (nvme10n1) 00:22:34.866 Could not set queue depth (nvme1n1) 00:22:34.866 Could not set queue depth (nvme2n1) 00:22:34.866 Could not set queue depth (nvme3n1) 00:22:34.866 Could not set queue depth (nvme4n1) 00:22:34.866 Could not set queue depth (nvme5n1) 00:22:34.866 Could not set queue depth (nvme6n1) 00:22:34.866 Could not set queue depth (nvme7n1) 00:22:34.866 Could not set queue depth (nvme8n1) 00:22:34.866 Could not set queue depth (nvme9n1) 00:22:34.866 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:34.866 fio-3.35 00:22:34.866 Starting 11 threads 00:22:44.873 00:22:44.873 job0: (groupid=0, jobs=1): err= 0: pid=1117835: Tue Jun 11 08:15:14 2024 00:22:44.873 write: IOPS=717, BW=179MiB/s (188MB/s)(1803MiB/10056msec); 0 zone resets 00:22:44.873 slat (usec): min=20, max=31786, avg=1336.33, stdev=2520.48 00:22:44.873 clat (msec): min=3, max=158, avg=87.84, stdev=25.01 00:22:44.873 lat (msec): min=3, max=158, avg=89.17, stdev=25.35 00:22:44.873 clat percentiles (msec): 00:22:44.873 | 1.00th=[ 26], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 66], 00:22:44.873 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 99], 00:22:44.873 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 123], 95.00th=[ 132], 00:22:44.873 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 159], 99.95th=[ 159], 00:22:44.873 | 99.99th=[ 159] 00:22:44.873 bw ( KiB/s): min=110592, max=268800, per=8.75%, avg=183040.00, stdev=48640.57, samples=20 00:22:44.873 iops : min= 432, max= 1050, avg=715.00, stdev=190.00, samples=20 00:22:44.873 lat (msec) : 4=0.01%, 10=0.19%, 20=0.47%, 50=2.87%, 100=60.97% 00:22:44.873 lat (msec) : 250=35.48% 00:22:44.873 cpu : usr=1.68%, sys=1.94%, ctx=2123, majf=0, minf=1 00:22:44.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:44.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.873 issued rwts: total=0,7213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.873 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:22:44.873 job1: (groupid=0, jobs=1): err= 0: pid=1117848: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=752, BW=188MiB/s (197MB/s)(1904MiB/10119msec); 0 zone resets 00:22:44.874 slat (usec): min=20, max=44305, avg=1264.48, stdev=2357.50 00:22:44.874 clat (msec): min=4, max=251, avg=83.71, stdev=23.28 00:22:44.874 lat (msec): min=5, max=251, avg=84.97, stdev=23.55 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 24], 5.00th=[ 55], 10.00th=[ 59], 20.00th=[ 63], 00:22:44.874 | 30.00th=[ 74], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 91], 00:22:44.874 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 107], 95.00th=[ 120], 00:22:44.874 | 99.00th=[ 142], 99.50th=[ 167], 99.90th=[ 234], 99.95th=[ 243], 00:22:44.874 | 99.99th=[ 251] 00:22:44.874 bw ( KiB/s): min=125952, max=267264, per=9.24%, avg=193356.80, stdev=41689.93, samples=20 00:22:44.874 iops : min= 492, max= 1044, avg=755.30, stdev=162.85, samples=20 00:22:44.874 lat (msec) : 10=0.29%, 20=0.54%, 50=2.64%, 100=69.58%, 250=26.93% 00:22:44.874 lat (msec) : 500=0.03% 00:22:44.874 cpu : usr=1.67%, sys=2.10%, ctx=2176, majf=0, minf=1 00:22:44.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.874 issued rwts: total=0,7616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.874 job2: (groupid=0, jobs=1): err= 0: pid=1117852: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=669, BW=167MiB/s (175MB/s)(1682MiB/10053msec); 0 zone resets 00:22:44.874 slat (usec): min=25, max=26965, avg=1376.71, stdev=2616.10 00:22:44.874 clat (msec): min=6, max=173, avg=94.20, stdev=25.07 00:22:44.874 lat (msec): min=7, max=175, avg=95.58, stdev=25.30 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 39], 5.00th=[ 54], 10.00th=[ 57], 20.00th=[ 75], 00:22:44.874 | 30.00th=[ 80], 40.00th=[ 91], 50.00th=[ 100], 60.00th=[ 103], 00:22:44.874 | 70.00th=[ 105], 80.00th=[ 112], 90.00th=[ 129], 95.00th=[ 132], 00:22:44.874 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 171], 99.95th=[ 174], 00:22:44.874 | 99.99th=[ 174] 00:22:44.874 bw ( KiB/s): min=115712, max=282624, per=8.15%, avg=170598.40, stdev=42916.67, samples=20 00:22:44.874 iops : min= 452, max= 1104, avg=666.40, stdev=167.64, samples=20 00:22:44.874 lat (msec) : 10=0.07%, 20=0.30%, 50=1.28%, 100=51.24%, 250=47.11% 00:22:44.874 cpu : usr=1.57%, sys=2.19%, ctx=2081, majf=0, minf=1 00:22:44.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.874 issued rwts: total=0,6727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.874 job3: (groupid=0, jobs=1): err= 0: pid=1117855: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=617, BW=154MiB/s (162MB/s)(1562MiB/10118msec); 0 zone resets 00:22:44.874 slat (usec): min=17, max=279511, avg=1591.85, stdev=5379.10 00:22:44.874 clat (msec): min=10, max=355, avg=101.96, stdev=34.48 00:22:44.874 lat (msec): min=11, max=355, avg=103.56, stdev=34.66 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 56], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 81], 00:22:44.874 | 30.00th=[ 85], 40.00th=[ 92], 50.00th=[ 99], 
60.00th=[ 103], 00:22:44.874 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 125], 95.00th=[ 136], 00:22:44.874 | 99.00th=[ 292], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 351], 00:22:44.874 | 99.99th=[ 355] 00:22:44.874 bw ( KiB/s): min=120832, max=210432, per=7.57%, avg=158361.60, stdev=25497.99, samples=20 00:22:44.874 iops : min= 472, max= 822, avg=618.60, stdev=99.60, samples=20 00:22:44.874 lat (msec) : 20=0.22%, 50=0.62%, 100=54.22%, 250=43.27%, 500=1.66% 00:22:44.874 cpu : usr=1.28%, sys=2.16%, ctx=1572, majf=0, minf=1 00:22:44.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.874 issued rwts: total=0,6249,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.874 job4: (groupid=0, jobs=1): err= 0: pid=1117856: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=1030, BW=258MiB/s (270MB/s)(2594MiB/10074msec); 0 zone resets 00:22:44.874 slat (usec): min=14, max=8266, avg=945.26, stdev=1731.04 00:22:44.874 clat (msec): min=6, max=155, avg=61.17, stdev=20.98 00:22:44.874 lat (msec): min=6, max=155, avg=62.11, stdev=21.25 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 44], 00:22:44.874 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 59], 00:22:44.874 | 70.00th=[ 75], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 108], 00:22:44.874 | 99.00th=[ 112], 99.50th=[ 120], 99.90th=[ 140], 99.95th=[ 150], 00:22:44.874 | 99.99th=[ 157] 00:22:44.874 bw ( KiB/s): min=149504, max=396800, per=12.62%, avg=264038.40, stdev=79968.00, samples=20 00:22:44.874 iops : min= 584, max= 1550, avg=1031.40, stdev=312.38, samples=20 00:22:44.874 lat (msec) : 10=0.04%, 20=0.20%, 50=39.42%, 100=52.05%, 250=8.29% 00:22:44.874 cpu : usr=2.31%, sys=3.20%, ctx=2694, majf=0, minf=1 00:22:44.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.874 issued rwts: total=0,10377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.874 job5: (groupid=0, jobs=1): err= 0: pid=1117857: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=670, BW=168MiB/s (176MB/s)(1697MiB/10118msec); 0 zone resets 00:22:44.874 slat (usec): min=15, max=51866, avg=1391.25, stdev=2751.19 00:22:44.874 clat (msec): min=2, max=255, avg=93.72, stdev=26.66 00:22:44.874 lat (msec): min=2, max=255, avg=95.11, stdev=26.95 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 15], 5.00th=[ 58], 10.00th=[ 72], 20.00th=[ 77], 00:22:44.874 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 95], 60.00th=[ 100], 00:22:44.874 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 128], 95.00th=[ 140], 00:22:44.874 | 99.00th=[ 165], 99.50th=[ 182], 99.90th=[ 239], 99.95th=[ 249], 00:22:44.874 | 99.99th=[ 255] 00:22:44.874 bw ( KiB/s): min=103936, max=257536, per=8.22%, avg=172108.80, stdev=38131.66, samples=20 00:22:44.874 iops : min= 406, max= 1006, avg=672.30, stdev=148.95, samples=20 00:22:44.874 lat (msec) : 4=0.07%, 10=0.41%, 20=1.02%, 50=1.65%, 100=58.77% 00:22:44.874 lat (msec) : 250=38.05%, 500=0.03% 00:22:44.874 cpu : usr=1.60%, sys=1.98%, ctx=2108, majf=0, minf=1 00:22:44.874 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.874 issued rwts: total=0,6786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.874 job6: (groupid=0, jobs=1): err= 0: pid=1117858: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=962, BW=241MiB/s (252MB/s)(2418MiB/10045msec); 0 zone resets 00:22:44.874 slat (usec): min=19, max=7880, avg=1029.18, stdev=1819.15 00:22:44.874 clat (msec): min=9, max=107, avg=65.42, stdev=16.17 00:22:44.874 lat (msec): min=9, max=107, avg=66.45, stdev=16.40 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 54], 00:22:44.874 | 30.00th=[ 57], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:22:44.874 | 70.00th=[ 67], 80.00th=[ 80], 90.00th=[ 95], 95.00th=[ 102], 00:22:44.874 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 108], 00:22:44.874 | 99.99th=[ 108] 00:22:44.874 bw ( KiB/s): min=160256, max=327168, per=11.76%, avg=246016.00, stdev=54325.74, samples=20 00:22:44.874 iops : min= 626, max= 1278, avg=961.00, stdev=212.21, samples=20 00:22:44.874 lat (msec) : 10=0.04%, 20=0.08%, 50=12.97%, 100=79.74%, 250=7.16% 00:22:44.874 cpu : usr=2.33%, sys=2.99%, ctx=2405, majf=0, minf=1 00:22:44.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.874 issued rwts: total=0,9673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.874 job7: (groupid=0, jobs=1): err= 0: pid=1117859: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=656, BW=164MiB/s (172MB/s)(1661MiB/10119msec); 0 zone resets 00:22:44.874 slat (usec): min=22, max=44383, avg=1500.40, stdev=2734.39 00:22:44.874 clat (msec): min=14, max=249, avg=95.93, stdev=22.77 00:22:44.874 lat (msec): min=14, max=249, avg=97.43, stdev=22.96 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 55], 5.00th=[ 63], 10.00th=[ 74], 20.00th=[ 79], 00:22:44.874 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 100], 00:22:44.874 | 70.00th=[ 105], 80.00th=[ 113], 90.00th=[ 124], 95.00th=[ 136], 00:22:44.874 | 99.00th=[ 157], 99.50th=[ 182], 99.90th=[ 232], 99.95th=[ 241], 00:22:44.874 | 99.99th=[ 249] 00:22:44.874 bw ( KiB/s): min=110592, max=238080, per=8.05%, avg=168499.20, stdev=34469.07, samples=20 00:22:44.874 iops : min= 432, max= 930, avg=658.20, stdev=134.64, samples=20 00:22:44.874 lat (msec) : 20=0.12%, 50=0.30%, 100=60.69%, 250=38.89% 00:22:44.874 cpu : usr=1.65%, sys=2.00%, ctx=1699, majf=0, minf=1 00:22:44.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:44.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.874 issued rwts: total=0,6645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.874 job8: (groupid=0, jobs=1): err= 0: pid=1117861: Tue Jun 11 08:15:14 2024 00:22:44.874 write: IOPS=773, BW=193MiB/s (203MB/s)(1957MiB/10126msec); 0 zone resets 00:22:44.874 slat (usec): min=24, max=43903, avg=1146.68, 
stdev=2454.00 00:22:44.874 clat (msec): min=2, max=274, avg=81.60, stdev=33.37 00:22:44.874 lat (msec): min=3, max=274, avg=82.74, stdev=33.82 00:22:44.874 clat percentiles (msec): 00:22:44.874 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 51], 20.00th=[ 58], 00:22:44.874 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 77], 60.00th=[ 94], 00:22:44.874 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 126], 95.00th=[ 133], 00:22:44.875 | 99.00th=[ 155], 99.50th=[ 190], 99.90th=[ 257], 99.95th=[ 266], 00:22:44.875 | 99.99th=[ 275] 00:22:44.875 bw ( KiB/s): min=126976, max=328192, per=9.50%, avg=198784.00, stdev=55740.06, samples=20 00:22:44.875 iops : min= 496, max= 1282, avg=776.50, stdev=217.73, samples=20 00:22:44.875 lat (msec) : 4=0.03%, 10=0.79%, 20=2.41%, 50=6.62%, 100=55.21% 00:22:44.875 lat (msec) : 250=34.82%, 500=0.13% 00:22:44.875 cpu : usr=1.74%, sys=2.47%, ctx=2831, majf=0, minf=1 00:22:44.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.875 issued rwts: total=0,7829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.875 job9: (groupid=0, jobs=1): err= 0: pid=1117862: Tue Jun 11 08:15:14 2024 00:22:44.875 write: IOPS=684, BW=171MiB/s (180MB/s)(1725MiB/10075msec); 0 zone resets 00:22:44.875 slat (usec): min=17, max=18295, avg=1386.71, stdev=2544.90 00:22:44.875 clat (msec): min=2, max=156, avg=92.03, stdev=23.43 00:22:44.875 lat (msec): min=2, max=156, avg=93.42, stdev=23.73 00:22:44.875 clat percentiles (msec): 00:22:44.875 | 1.00th=[ 17], 5.00th=[ 60], 10.00th=[ 72], 20.00th=[ 75], 00:22:44.875 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 95], 60.00th=[ 103], 00:22:44.875 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 124], 95.00th=[ 131], 00:22:44.875 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 150], 00:22:44.875 | 99.99th=[ 157] 00:22:44.875 bw ( KiB/s): min=120832, max=292352, per=8.36%, avg=175027.20, stdev=40878.50, samples=20 00:22:44.875 iops : min= 472, max= 1142, avg=683.70, stdev=159.68, samples=20 00:22:44.875 lat (msec) : 4=0.04%, 10=0.36%, 20=0.90%, 50=2.45%, 100=52.83% 00:22:44.875 lat (msec) : 250=43.42% 00:22:44.875 cpu : usr=1.81%, sys=2.15%, ctx=2069, majf=0, minf=1 00:22:44.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.875 issued rwts: total=0,6900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.875 job10: (groupid=0, jobs=1): err= 0: pid=1117863: Tue Jun 11 08:15:14 2024 00:22:44.875 write: IOPS=668, BW=167MiB/s (175MB/s)(1691MiB/10121msec); 0 zone resets 00:22:44.875 slat (usec): min=24, max=45739, avg=1419.37, stdev=2717.33 00:22:44.875 clat (msec): min=5, max=245, avg=94.31, stdev=24.93 00:22:44.875 lat (msec): min=6, max=245, avg=95.73, stdev=25.20 00:22:44.875 clat percentiles (msec): 00:22:44.875 | 1.00th=[ 24], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 79], 00:22:44.875 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 97], 60.00th=[ 101], 00:22:44.875 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 122], 95.00th=[ 136], 00:22:44.875 | 99.00th=[ 161], 99.50th=[ 178], 99.90th=[ 230], 99.95th=[ 239], 00:22:44.875 | 99.99th=[ 247] 
00:22:44.875 bw ( KiB/s): min=118784, max=243712, per=8.20%, avg=171545.60, stdev=34181.50, samples=20 00:22:44.875 iops : min= 464, max= 952, avg=670.10, stdev=133.52, samples=20 00:22:44.875 lat (msec) : 10=0.06%, 20=0.67%, 50=3.27%, 100=54.55%, 250=41.45% 00:22:44.875 cpu : usr=1.41%, sys=2.04%, ctx=2015, majf=0, minf=1 00:22:44.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:44.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.875 issued rwts: total=0,6764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.875 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.875 00:22:44.875 Run status group 0 (all jobs): 00:22:44.875 WRITE: bw=2044MiB/s (2143MB/s), 154MiB/s-258MiB/s (162MB/s-270MB/s), io=20.2GiB (21.7GB), run=10045-10126msec 00:22:44.875 00:22:44.875 Disk stats (read/write): 00:22:44.875 nvme0n1: ios=47/13984, merge=0/0, ticks=1907/1200360, in_queue=1202267, util=100.00% 00:22:44.875 nvme10n1: ios=50/15200, merge=0/0, ticks=2496/1225897, in_queue=1228393, util=100.00% 00:22:44.875 nvme1n1: ios=43/12990, merge=0/0, ticks=821/1202779, in_queue=1203600, util=100.00% 00:22:44.875 nvme2n1: ios=42/12467, merge=0/0, ticks=2638/1166913, in_queue=1169551, util=100.00% 00:22:44.875 nvme3n1: ios=0/20385, merge=0/0, ticks=0/1198221, in_queue=1198221, util=97.26% 00:22:44.875 nvme4n1: ios=50/13541, merge=0/0, ticks=1195/1223546, in_queue=1224741, util=100.00% 00:22:44.875 nvme5n1: ios=0/18800, merge=0/0, ticks=0/1199966, in_queue=1199966, util=97.93% 00:22:44.875 nvme6n1: ios=0/13258, merge=0/0, ticks=0/1225126, in_queue=1225126, util=98.15% 00:22:44.875 nvme7n1: ios=38/15614, merge=0/0, ticks=1578/1229978, in_queue=1231556, util=99.89% 00:22:44.875 nvme8n1: ios=0/13431, merge=0/0, ticks=0/1199405, in_queue=1199405, util=98.88% 00:22:44.875 nvme9n1: ios=0/13492, merge=0/0, ticks=0/1228200, in_queue=1228200, util=99.10% 00:22:44.875 08:15:14 -- target/multiconnection.sh@36 -- # sync 00:22:44.875 08:15:14 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:44.875 08:15:14 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.875 08:15:14 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:44.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:44.875 08:15:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:44.875 08:15:15 -- common/autotest_common.sh@1198 -- # local i=0 00:22:44.875 08:15:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:44.875 08:15:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:44.875 08:15:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:44.875 08:15:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:44.875 08:15:15 -- common/autotest_common.sh@1210 -- # return 0 00:22:44.875 08:15:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.875 08:15:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.875 08:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:44.875 08:15:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.875 08:15:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.875 08:15:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:44.875 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 
controller(s) 00:22:44.875 08:15:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:44.875 08:15:15 -- common/autotest_common.sh@1198 -- # local i=0 00:22:44.875 08:15:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:44.875 08:15:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:44.875 08:15:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:44.875 08:15:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:44.875 08:15:15 -- common/autotest_common.sh@1210 -- # return 0 00:22:44.875 08:15:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:44.875 08:15:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.875 08:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:44.875 08:15:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.875 08:15:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.875 08:15:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:45.136 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:45.136 08:15:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:45.136 08:15:15 -- common/autotest_common.sh@1198 -- # local i=0 00:22:45.136 08:15:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:45.136 08:15:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:45.136 08:15:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:45.136 08:15:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:45.136 08:15:15 -- common/autotest_common.sh@1210 -- # return 0 00:22:45.136 08:15:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:45.136 08:15:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.136 08:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.136 08:15:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.136 08:15:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:45.136 08:15:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:45.396 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:45.396 08:15:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:45.396 08:15:15 -- common/autotest_common.sh@1198 -- # local i=0 00:22:45.396 08:15:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:45.396 08:15:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:45.396 08:15:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:45.396 08:15:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:45.396 08:15:15 -- common/autotest_common.sh@1210 -- # return 0 00:22:45.396 08:15:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:45.396 08:15:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.396 08:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.396 08:15:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.396 08:15:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:45.396 08:15:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:45.657 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:45.657 08:15:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK5 00:22:45.657 08:15:16 -- common/autotest_common.sh@1198 -- # local i=0 00:22:45.657 08:15:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:45.657 08:15:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:45.657 08:15:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:45.657 08:15:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:45.657 08:15:16 -- common/autotest_common.sh@1210 -- # return 0 00:22:45.657 08:15:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:45.657 08:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.657 08:15:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.657 08:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.657 08:15:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:45.657 08:15:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:45.920 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:45.920 08:15:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:45.920 08:15:16 -- common/autotest_common.sh@1198 -- # local i=0 00:22:45.920 08:15:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:45.920 08:15:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:45.920 08:15:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:45.921 08:15:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:45.921 08:15:16 -- common/autotest_common.sh@1210 -- # return 0 00:22:45.921 08:15:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:45.921 08:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.921 08:15:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.921 08:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:45.921 08:15:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:45.921 08:15:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:45.921 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:45.921 08:15:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:45.921 08:15:16 -- common/autotest_common.sh@1198 -- # local i=0 00:22:45.921 08:15:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:45.921 08:15:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:45.921 08:15:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:45.921 08:15:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:45.921 08:15:16 -- common/autotest_common.sh@1210 -- # return 0 00:22:45.921 08:15:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:45.921 08:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:45.921 08:15:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.221 08:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.221 08:15:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:46.221 08:15:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:46.221 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:46.221 08:15:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:46.221 08:15:16 -- common/autotest_common.sh@1198 -- # local i=0 00:22:46.221 08:15:16 
-- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:46.221 08:15:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:46.221 08:15:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:46.221 08:15:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:46.221 08:15:16 -- common/autotest_common.sh@1210 -- # return 0 00:22:46.221 08:15:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:46.221 08:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.221 08:15:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.221 08:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.221 08:15:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:46.221 08:15:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:46.537 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:46.537 08:15:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:46.537 08:15:16 -- common/autotest_common.sh@1198 -- # local i=0 00:22:46.537 08:15:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:46.537 08:15:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:46.537 08:15:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:46.537 08:15:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:46.537 08:15:16 -- common/autotest_common.sh@1210 -- # return 0 00:22:46.537 08:15:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:46.537 08:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.537 08:15:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.537 08:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.537 08:15:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:46.537 08:15:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:46.537 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:46.537 08:15:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:46.537 08:15:17 -- common/autotest_common.sh@1198 -- # local i=0 00:22:46.537 08:15:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:46.537 08:15:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:46.537 08:15:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:46.537 08:15:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:46.537 08:15:17 -- common/autotest_common.sh@1210 -- # return 0 00:22:46.537 08:15:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:46.537 08:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.537 08:15:17 -- common/autotest_common.sh@10 -- # set +x 00:22:46.537 08:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.537 08:15:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:46.537 08:15:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:46.537 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:46.537 08:15:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:46.537 08:15:17 -- common/autotest_common.sh@1198 -- # local i=0 00:22:46.537 08:15:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:46.537 08:15:17 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:46.798 08:15:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:46.798 08:15:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:46.798 08:15:17 -- common/autotest_common.sh@1210 -- # return 0 00:22:46.798 08:15:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:46.798 08:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.798 08:15:17 -- common/autotest_common.sh@10 -- # set +x 00:22:46.798 08:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.798 08:15:17 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:46.798 08:15:17 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:46.798 08:15:17 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:46.798 08:15:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:46.798 08:15:17 -- nvmf/common.sh@116 -- # sync 00:22:46.798 08:15:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:46.798 08:15:17 -- nvmf/common.sh@119 -- # set +e 00:22:46.798 08:15:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:46.798 08:15:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:46.798 rmmod nvme_tcp 00:22:46.798 rmmod nvme_fabrics 00:22:46.798 rmmod nvme_keyring 00:22:46.798 08:15:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:46.798 08:15:17 -- nvmf/common.sh@123 -- # set -e 00:22:46.798 08:15:17 -- nvmf/common.sh@124 -- # return 0 00:22:46.798 08:15:17 -- nvmf/common.sh@477 -- # '[' -n 1107125 ']' 00:22:46.798 08:15:17 -- nvmf/common.sh@478 -- # killprocess 1107125 00:22:46.798 08:15:17 -- common/autotest_common.sh@926 -- # '[' -z 1107125 ']' 00:22:46.798 08:15:17 -- common/autotest_common.sh@930 -- # kill -0 1107125 00:22:46.798 08:15:17 -- common/autotest_common.sh@931 -- # uname 00:22:46.798 08:15:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:46.798 08:15:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1107125 00:22:46.798 08:15:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:46.798 08:15:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:46.798 08:15:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1107125' 00:22:46.798 killing process with pid 1107125 00:22:46.798 08:15:17 -- common/autotest_common.sh@945 -- # kill 1107125 00:22:46.798 08:15:17 -- common/autotest_common.sh@950 -- # wait 1107125 00:22:47.059 08:15:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:47.059 08:15:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:47.059 08:15:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:47.059 08:15:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.059 08:15:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:47.059 08:15:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.059 08:15:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.059 08:15:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.603 08:15:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:49.603 00:22:49.603 real 1m16.211s 00:22:49.603 user 4m49.270s 00:22:49.603 sys 0m23.402s 00:22:49.603 08:15:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:49.603 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.603 ************************************ 00:22:49.603 END TEST nvmf_multiconnection 
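The multiconnection teardown traced above repeats one pattern for each of the 11 subsystems: disconnect the initiator from cnodeN, poll lsblk until no block device with serial SPDKN is still visible, then delete the subsystem over JSON-RPC. A condensed sketch of that loop follows; rpc_cmd from the trace is abbreviated to rpc.py, and the polling is simplified (the waitforserial_disconnect helper in the trace additionally bounds how long it polls):

  for i in $(seq 1 11); do
      # drop the host-side connection to subsystem i
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # wait until no namespace with serial SPDK${i} is still exposed
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      # remove the subsystem from the target
      rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done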
00:22:49.603 ************************************ 00:22:49.603 08:15:19 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:49.603 08:15:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:49.603 08:15:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:49.603 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:22:49.603 ************************************ 00:22:49.603 START TEST nvmf_initiator_timeout 00:22:49.603 ************************************ 00:22:49.603 08:15:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:49.603 * Looking for test storage... 00:22:49.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:49.603 08:15:19 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.603 08:15:19 -- nvmf/common.sh@7 -- # uname -s 00:22:49.603 08:15:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.603 08:15:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.603 08:15:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.603 08:15:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.603 08:15:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.603 08:15:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.603 08:15:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.603 08:15:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.603 08:15:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.603 08:15:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.603 08:15:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.604 08:15:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.604 08:15:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.604 08:15:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.604 08:15:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.604 08:15:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.604 08:15:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.604 08:15:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.604 08:15:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.604 08:15:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.604 08:15:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.604 08:15:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.604 08:15:19 -- paths/export.sh@5 -- # export PATH 00:22:49.604 08:15:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.604 08:15:19 -- nvmf/common.sh@46 -- # : 0 00:22:49.604 08:15:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:49.604 08:15:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:49.604 08:15:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:49.604 08:15:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.604 08:15:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.604 08:15:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:49.604 08:15:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:49.604 08:15:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:49.604 08:15:19 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:49.604 08:15:19 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:49.604 08:15:19 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:49.604 08:15:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:49.604 08:15:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.604 08:15:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:49.604 08:15:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:49.604 08:15:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:49.604 08:15:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.604 08:15:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:49.604 08:15:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.604 08:15:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:49.604 08:15:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:49.604 08:15:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:49.604 08:15:19 -- common/autotest_common.sh@10 -- # set +x 00:22:56.188 08:15:26 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:22:56.188 08:15:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:56.188 08:15:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:56.188 08:15:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:56.188 08:15:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:56.188 08:15:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:56.188 08:15:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:56.188 08:15:26 -- nvmf/common.sh@294 -- # net_devs=() 00:22:56.188 08:15:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:56.188 08:15:26 -- nvmf/common.sh@295 -- # e810=() 00:22:56.188 08:15:26 -- nvmf/common.sh@295 -- # local -ga e810 00:22:56.188 08:15:26 -- nvmf/common.sh@296 -- # x722=() 00:22:56.188 08:15:26 -- nvmf/common.sh@296 -- # local -ga x722 00:22:56.188 08:15:26 -- nvmf/common.sh@297 -- # mlx=() 00:22:56.188 08:15:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:56.188 08:15:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.188 08:15:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:56.188 08:15:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:56.188 08:15:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:56.188 08:15:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:56.188 08:15:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:56.188 08:15:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:56.188 08:15:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:56.188 08:15:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:56.188 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:56.188 08:15:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:56.188 08:15:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:56.188 08:15:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:56.189 08:15:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:56.189 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:56.189 08:15:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:56.189 08:15:26 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:56.189 08:15:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:56.189 08:15:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.189 08:15:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:56.189 08:15:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.189 08:15:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:56.189 Found net devices under 0000:31:00.0: cvl_0_0 00:22:56.189 08:15:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.189 08:15:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:56.189 08:15:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.189 08:15:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:56.189 08:15:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.189 08:15:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:56.189 Found net devices under 0000:31:00.1: cvl_0_1 00:22:56.189 08:15:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.189 08:15:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:56.189 08:15:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:56.189 08:15:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:56.189 08:15:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:56.189 08:15:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.189 08:15:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.189 08:15:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.189 08:15:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:56.189 08:15:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.189 08:15:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.189 08:15:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:56.189 08:15:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.189 08:15:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.189 08:15:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:56.189 08:15:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:56.189 08:15:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.189 08:15:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.450 08:15:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.450 08:15:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.450 08:15:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:56.450 08:15:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.450 08:15:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.450 08:15:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.450 08:15:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:56.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:56.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:22:56.450 00:22:56.450 --- 10.0.0.2 ping statistics --- 00:22:56.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.450 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:22:56.450 08:15:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:22:56.450 00:22:56.450 --- 10.0.0.1 ping statistics --- 00:22:56.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.450 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:22:56.450 08:15:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.450 08:15:27 -- nvmf/common.sh@410 -- # return 0 00:22:56.450 08:15:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:56.450 08:15:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.450 08:15:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:56.450 08:15:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:56.450 08:15:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.450 08:15:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:56.450 08:15:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:56.450 08:15:27 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:56.450 08:15:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:56.450 08:15:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:56.450 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:56.450 08:15:27 -- nvmf/common.sh@469 -- # nvmfpid=1125025 00:22:56.450 08:15:27 -- nvmf/common.sh@470 -- # waitforlisten 1125025 00:22:56.450 08:15:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:56.450 08:15:27 -- common/autotest_common.sh@819 -- # '[' -z 1125025 ']' 00:22:56.450 08:15:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.450 08:15:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:56.450 08:15:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.450 08:15:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:56.450 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:56.450 [2024-06-11 08:15:27.084527] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:56.450 [2024-06-11 08:15:27.084576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.710 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.710 [2024-06-11 08:15:27.150822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.710 [2024-06-11 08:15:27.213938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:56.710 [2024-06-11 08:15:27.214068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
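The nvmf_tcp_init sequence above is what lets one physical host act as both target and initiator: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2, the second port (cvl_0_1) stays in the root namespace with 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is verified in both directions before nvmf_tgt is launched inside the namespace. A condensed recap of those commands as they appear in the trace (the interface names and the 10.0.0.0/24 addressing are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace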
00:22:56.710 [2024-06-11 08:15:27.214078] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.710 [2024-06-11 08:15:27.214086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.710 [2024-06-11 08:15:27.214228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.710 [2024-06-11 08:15:27.214334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.710 [2024-06-11 08:15:27.217469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.710 [2024-06-11 08:15:27.217608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.280 08:15:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:57.280 08:15:27 -- common/autotest_common.sh@852 -- # return 0 00:22:57.280 08:15:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:57.280 08:15:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:57.280 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.280 08:15:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.280 08:15:27 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:57.280 08:15:27 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:57.280 08:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.280 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.280 Malloc0 00:22:57.280 08:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.280 08:15:27 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:57.280 08:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.280 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.280 Delay0 00:22:57.280 08:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.280 08:15:27 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.280 08:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.280 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.541 [2024-06-11 08:15:27.928849] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.541 08:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.541 08:15:27 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:57.541 08:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.541 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.541 08:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.541 08:15:27 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:57.541 08:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.541 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.541 08:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.541 08:15:27 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.541 08:15:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.541 08:15:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.541 [2024-06-11 08:15:27.965898] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.541 08:15:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.541 08:15:27 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:58.924 08:15:29 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:58.925 08:15:29 -- common/autotest_common.sh@1177 -- # local i=0 00:22:58.925 08:15:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:58.925 08:15:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:58.925 08:15:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:00.840 08:15:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:00.840 08:15:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:00.840 08:15:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:00.840 08:15:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:00.840 08:15:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:00.840 08:15:31 -- common/autotest_common.sh@1187 -- # return 0 00:23:00.840 08:15:31 -- target/initiator_timeout.sh@35 -- # fio_pid=1125933 00:23:00.840 08:15:31 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:00.840 08:15:31 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:00.840 [global] 00:23:00.840 thread=1 00:23:00.840 invalidate=1 00:23:00.840 rw=write 00:23:00.840 time_based=1 00:23:00.840 runtime=60 00:23:00.840 ioengine=libaio 00:23:00.840 direct=1 00:23:00.840 bs=4096 00:23:00.840 iodepth=1 00:23:00.840 norandommap=0 00:23:00.840 numjobs=1 00:23:00.840 00:23:00.840 verify_dump=1 00:23:00.840 verify_backlog=512 00:23:00.840 verify_state_save=0 00:23:00.840 do_verify=1 00:23:00.840 verify=crc32c-intel 00:23:00.840 [job0] 00:23:00.840 filename=/dev/nvme0n1 00:23:01.124 Could not set queue depth (nvme0n1) 00:23:01.384 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:01.384 fio-3.35 00:23:01.384 Starting 1 thread 00:23:03.953 08:15:34 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:03.953 08:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.953 08:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:03.953 true 00:23:03.953 08:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.953 08:15:34 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:03.953 08:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.953 08:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:03.953 true 00:23:03.953 08:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.953 08:15:34 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:03.953 08:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.953 08:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:03.953 true 00:23:03.953 08:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.953 08:15:34 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
00:23:03.953 08:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.953 08:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:03.953 true 00:23:03.953 08:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.953 08:15:34 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:07.251 08:15:37 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:07.251 08:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.251 08:15:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.251 true 00:23:07.251 08:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.251 08:15:37 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:07.251 08:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.251 08:15:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.251 true 00:23:07.251 08:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.251 08:15:37 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:07.251 08:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.251 08:15:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.251 true 00:23:07.251 08:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.251 08:15:37 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:07.251 08:15:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.251 08:15:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.251 true 00:23:07.251 08:15:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.251 08:15:37 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:07.251 08:15:37 -- target/initiator_timeout.sh@54 -- # wait 1125933 00:24:03.514 00:24:03.514 job0: (groupid=0, jobs=1): err= 0: pid=1126239: Tue Jun 11 08:16:31 2024 00:24:03.514 read: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60001msec) 00:24:03.514 slat (usec): min=7, max=7626, avg=29.69, stdev=150.24 00:24:03.514 clat (usec): min=766, max=41758k, avg=22754.40, stdev=825305.46 00:24:03.514 lat (usec): min=793, max=41758k, avg=22784.09, stdev=825305.87 00:24:03.514 clat percentiles (usec): 00:24:03.514 | 1.00th=[ 848], 5.00th=[ 947], 10.00th=[ 979], 00:24:03.514 | 20.00th=[ 1029], 30.00th=[ 1057], 40.00th=[ 1090], 00:24:03.514 | 50.00th=[ 1123], 60.00th=[ 1139], 70.00th=[ 1172], 00:24:03.514 | 80.00th=[ 1205], 90.00th=[ 41681], 95.00th=[ 42206], 00:24:03.514 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 43254], 00:24:03.514 | 99.95th=[ 43779], 99.99th=[17112761] 00:24:03.514 write: IOPS=44, BW=179KiB/s (183kB/s)(10.5MiB/60001msec); 0 zone resets 00:24:03.514 slat (usec): min=9, max=31274, avg=42.27, stdev=603.83 00:24:03.514 clat (usec): min=208, max=968, avg=566.33, stdev=109.59 00:24:03.514 lat (usec): min=220, max=31996, avg=608.60, stdev=617.26 00:24:03.514 clat percentiles (usec): 00:24:03.514 | 1.00th=[ 306], 5.00th=[ 388], 10.00th=[ 420], 20.00th=[ 478], 00:24:03.514 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 594], 00:24:03.514 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 734], 00:24:03.514 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 922], 99.95th=[ 947], 00:24:03.514 | 99.99th=[ 971] 00:24:03.514 bw ( KiB/s): min= 848, max= 4096, per=100.00%, avg=2560.00, stdev=1190.99, samples=8 00:24:03.514 iops : min= 212, max= 1024, avg=640.00, stdev=297.75, samples=8 00:24:03.514 lat (usec) : 250=0.17%, 500=13.06%, 750=36.06%, 
1000=8.67% 00:24:03.514 lat (msec) : 2=35.61%, 50=6.41%, >=2000=0.02% 00:24:03.514 cpu : usr=0.21%, sys=0.31%, ctx=5243, majf=0, minf=1 00:24:03.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.514 issued rwts: total=2560,2678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:03.514 00:24:03.514 Run status group 0 (all jobs): 00:24:03.514 READ: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60001-60001msec 00:24:03.514 WRITE: bw=179KiB/s (183kB/s), 179KiB/s-179KiB/s (183kB/s-183kB/s), io=10.5MiB (11.0MB), run=60001-60001msec 00:24:03.514 00:24:03.514 Disk stats (read/write): 00:24:03.514 nvme0n1: ios=2587/2560, merge=0/0, ticks=17625/1129, in_queue=18754, util=99.79% 00:24:03.514 08:16:31 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:03.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:03.514 08:16:32 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:03.514 08:16:32 -- common/autotest_common.sh@1198 -- # local i=0 00:24:03.514 08:16:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:03.514 08:16:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:03.514 08:16:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:03.514 08:16:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:03.514 08:16:32 -- common/autotest_common.sh@1210 -- # return 0 00:24:03.514 08:16:32 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:03.514 08:16:32 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:03.514 nvmf hotplug test: fio successful as expected 00:24:03.514 08:16:32 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.514 08:16:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:03.514 08:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:03.514 08:16:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:03.514 08:16:32 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:03.514 08:16:32 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:03.514 08:16:32 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:03.514 08:16:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:03.514 08:16:32 -- nvmf/common.sh@116 -- # sync 00:24:03.514 08:16:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:03.514 08:16:32 -- nvmf/common.sh@119 -- # set +e 00:24:03.514 08:16:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:03.514 08:16:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:03.514 rmmod nvme_tcp 00:24:03.514 rmmod nvme_fabrics 00:24:03.514 rmmod nvme_keyring 00:24:03.514 08:16:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:03.514 08:16:32 -- nvmf/common.sh@123 -- # set -e 00:24:03.514 08:16:32 -- nvmf/common.sh@124 -- # return 0 00:24:03.514 08:16:32 -- nvmf/common.sh@477 -- # '[' -n 1125025 ']' 00:24:03.514 08:16:32 -- nvmf/common.sh@478 -- # killprocess 1125025 00:24:03.514 08:16:32 -- common/autotest_common.sh@926 -- # '[' -z 1125025 ']' 00:24:03.514 08:16:32 -- common/autotest_common.sh@930 -- # kill -0 
1125025 00:24:03.514 08:16:32 -- common/autotest_common.sh@931 -- # uname 00:24:03.514 08:16:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:03.514 08:16:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1125025 00:24:03.514 08:16:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:03.514 08:16:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:03.514 08:16:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1125025' 00:24:03.514 killing process with pid 1125025 00:24:03.514 08:16:32 -- common/autotest_common.sh@945 -- # kill 1125025 00:24:03.514 08:16:32 -- common/autotest_common.sh@950 -- # wait 1125025 00:24:03.514 08:16:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:03.514 08:16:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:03.514 08:16:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:03.514 08:16:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.514 08:16:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:03.514 08:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.514 08:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.514 08:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.775 08:16:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:03.775 00:24:03.775 real 1m14.669s 00:24:03.775 user 4m36.424s 00:24:03.775 sys 0m6.977s 00:24:03.775 08:16:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:03.775 08:16:34 -- common/autotest_common.sh@10 -- # set +x 00:24:03.775 ************************************ 00:24:03.775 END TEST nvmf_initiator_timeout 00:24:03.775 ************************************ 00:24:04.036 08:16:34 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:04.036 08:16:34 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:04.036 08:16:34 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:04.036 08:16:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:04.036 08:16:34 -- common/autotest_common.sh@10 -- # set +x 00:24:10.623 08:16:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.623 08:16:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:10.623 08:16:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:10.623 08:16:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:10.623 08:16:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:10.623 08:16:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:10.623 08:16:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:10.623 08:16:41 -- nvmf/common.sh@294 -- # net_devs=() 00:24:10.623 08:16:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:10.623 08:16:41 -- nvmf/common.sh@295 -- # e810=() 00:24:10.623 08:16:41 -- nvmf/common.sh@295 -- # local -ga e810 00:24:10.623 08:16:41 -- nvmf/common.sh@296 -- # x722=() 00:24:10.623 08:16:41 -- nvmf/common.sh@296 -- # local -ga x722 00:24:10.623 08:16:41 -- nvmf/common.sh@297 -- # mlx=() 00:24:10.623 08:16:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:10.623 08:16:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
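The nvmf_initiator_timeout run that ends above exercises the delay bdev rather than real media: a 64 MB Malloc0 bdev (512-byte blocks) is wrapped in Delay0 with 30 usec average and p99 latencies, exported as nqn.2016-06.io.spdk:cnode1 over TCP, and a 60-second fio verify job is started against the resulting /dev/nvme0n1. Mid-run the delay latencies are raised to tens of seconds, held there briefly, then dropped back to 30 usec; the test passes because fio still exits with status 0, which the "nvmf hotplug test: fio successful as expected" line records. A condensed sketch of the RPC sequence from the trace (rpc_cmd, which runs scripts/rpc.py inside the target namespace, is abbreviated to rpc.py; values are as logged):

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # fio starts against /dev/nvme0n1, then the delay is toggled while it runs
  rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
  rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
  rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  rpc.py bdev_delay_update_latency Delay0 avg_read 30
  rpc.py bdev_delay_update_latency Delay0 avg_write 30
  rpc.py bdev_delay_update_latency Delay0 p99_read 30
  rpc.py bdev_delay_update_latency Delay0 p99_write 30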
00:24:10.623 08:16:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.623 08:16:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:10.623 08:16:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:10.623 08:16:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:10.623 08:16:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.623 08:16:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:10.623 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:10.623 08:16:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.623 08:16:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:10.623 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:10.623 08:16:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:10.623 08:16:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:10.623 08:16:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.623 08:16:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.623 08:16:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.623 08:16:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.623 08:16:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:10.623 Found net devices under 0000:31:00.0: cvl_0_0 00:24:10.623 08:16:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.623 08:16:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.623 08:16:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.623 08:16:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.623 08:16:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.623 08:16:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:10.623 Found net devices under 0000:31:00.1: cvl_0_1 00:24:10.624 08:16:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.624 08:16:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:10.624 08:16:41 
-- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.624 08:16:41 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:10.624 08:16:41 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:10.624 08:16:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:10.624 08:16:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:10.624 08:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:10.624 ************************************ 00:24:10.624 START TEST nvmf_perf_adq 00:24:10.624 ************************************ 00:24:10.624 08:16:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:10.624 * Looking for test storage... 00:24:10.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:10.624 08:16:41 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.624 08:16:41 -- nvmf/common.sh@7 -- # uname -s 00:24:10.624 08:16:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.624 08:16:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.624 08:16:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.624 08:16:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.624 08:16:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.624 08:16:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.624 08:16:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.624 08:16:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.624 08:16:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.624 08:16:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.624 08:16:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.624 08:16:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:10.624 08:16:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.624 08:16:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.624 08:16:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.624 08:16:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.624 08:16:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.624 08:16:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.624 08:16:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.624 08:16:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.624 08:16:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.624 08:16:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.624 08:16:41 -- paths/export.sh@5 -- # export PATH 00:24:10.624 08:16:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.624 08:16:41 -- nvmf/common.sh@46 -- # : 0 00:24:10.624 08:16:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:10.624 08:16:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:10.624 08:16:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:10.624 08:16:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.624 08:16:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.624 08:16:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:10.624 08:16:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:10.624 08:16:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:10.624 08:16:41 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:10.624 08:16:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:10.624 08:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:18.757 08:16:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:18.757 08:16:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:18.757 08:16:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:18.757 08:16:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:18.757 08:16:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:18.757 08:16:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:18.757 08:16:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:18.757 08:16:48 -- nvmf/common.sh@294 -- # net_devs=() 00:24:18.757 08:16:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:18.757 08:16:48 -- nvmf/common.sh@295 -- # e810=() 00:24:18.757 08:16:48 -- nvmf/common.sh@295 -- # local -ga e810 00:24:18.757 08:16:48 -- nvmf/common.sh@296 -- # x722=() 00:24:18.757 08:16:48 -- nvmf/common.sh@296 -- # local -ga x722 00:24:18.757 08:16:48 -- nvmf/common.sh@297 -- # mlx=() 00:24:18.757 08:16:48 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:24:18.757 08:16:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.757 08:16:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:18.757 08:16:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:18.757 08:16:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:18.757 08:16:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:18.757 08:16:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:18.757 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:18.757 08:16:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:18.757 08:16:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:18.757 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:18.757 08:16:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:18.757 08:16:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:18.757 08:16:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:18.757 08:16:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.757 08:16:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:18.757 08:16:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.757 08:16:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:18.757 Found net devices under 0000:31:00.0: cvl_0_0 00:24:18.757 08:16:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.757 08:16:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:18.757 08:16:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:18.757 08:16:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:18.757 08:16:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.757 08:16:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:18.757 Found net devices under 0000:31:00.1: cvl_0_1 00:24:18.757 08:16:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.757 08:16:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:18.757 08:16:48 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.757 08:16:48 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:18.757 08:16:48 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:18.757 08:16:48 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:18.757 08:16:48 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:19.326 08:16:49 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:21.262 08:16:51 -- target/perf_adq.sh@54 -- # sleep 5 00:24:26.656 08:16:56 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:26.656 08:16:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:26.656 08:16:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.656 08:16:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:26.656 08:16:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:26.656 08:16:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:26.656 08:16:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.656 08:16:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.656 08:16:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.656 08:16:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:26.656 08:16:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:26.656 08:16:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:26.656 08:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.656 08:16:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:26.656 08:16:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:26.656 08:16:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:26.656 08:16:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:26.656 08:16:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:26.656 08:16:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:26.656 08:16:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:26.656 08:16:56 -- nvmf/common.sh@294 -- # net_devs=() 00:24:26.656 08:16:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:26.656 08:16:56 -- nvmf/common.sh@295 -- # e810=() 00:24:26.656 08:16:56 -- nvmf/common.sh@295 -- # local -ga e810 00:24:26.656 08:16:56 -- nvmf/common.sh@296 -- # x722=() 00:24:26.656 08:16:56 -- nvmf/common.sh@296 -- # local -ga x722 00:24:26.656 08:16:56 -- nvmf/common.sh@297 -- # mlx=() 00:24:26.656 08:16:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:26.656 08:16:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.656 08:16:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:26.656 08:16:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:26.656 08:16:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:26.656 08:16:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:26.656 08:16:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:26.656 08:16:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:26.656 08:16:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:26.656 08:16:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:26.656 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:26.656 08:16:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:26.656 08:16:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:26.657 08:16:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:26.657 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:26.657 08:16:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:26.657 08:16:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:26.657 08:16:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.657 08:16:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:26.657 08:16:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.657 08:16:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:26.657 Found net devices under 0000:31:00.0: cvl_0_0 00:24:26.657 08:16:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.657 08:16:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:26.657 08:16:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.657 08:16:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:26.657 08:16:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.657 08:16:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:26.657 Found net devices under 0000:31:00.1: cvl_0_1 00:24:26.657 08:16:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.657 08:16:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:26.657 08:16:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:26.657 08:16:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:26.657 08:16:56 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:26.657 08:16:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.657 08:16:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.657 08:16:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.657 08:16:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:26.657 08:16:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.657 08:16:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.657 08:16:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:26.657 08:16:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.657 08:16:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.657 08:16:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:26.657 08:16:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:26.657 08:16:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.657 08:16:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.657 08:16:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.657 08:16:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.657 08:16:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:26.657 08:16:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.657 08:16:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.657 08:16:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.657 08:16:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:26.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:24:26.657 00:24:26.657 --- 10.0.0.2 ping statistics --- 00:24:26.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.657 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:24:26.657 08:16:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:24:26.657 00:24:26.657 --- 10.0.0.1 ping statistics --- 00:24:26.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.657 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:24:26.657 08:16:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.657 08:16:56 -- nvmf/common.sh@410 -- # return 0 00:24:26.657 08:16:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:26.657 08:16:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.657 08:16:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:26.657 08:16:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.657 08:16:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:26.657 08:16:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:26.657 08:16:56 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:26.657 08:16:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:26.657 08:16:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:26.657 08:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.657 08:16:56 -- nvmf/common.sh@469 -- # nvmfpid=1147552 00:24:26.657 08:16:56 -- nvmf/common.sh@470 -- # waitforlisten 1147552 00:24:26.657 08:16:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:26.657 08:16:56 -- common/autotest_common.sh@819 -- # '[' -z 1147552 ']' 00:24:26.657 08:16:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.657 08:16:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:26.657 08:16:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.657 08:16:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:26.657 08:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.657 [2024-06-11 08:16:56.877984] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:26.657 [2024-06-11 08:16:56.878048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.657 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.657 [2024-06-11 08:16:56.951510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:26.657 [2024-06-11 08:16:57.025562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:26.657 [2024-06-11 08:16:57.025691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.657 [2024-06-11 08:16:57.025701] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.657 [2024-06-11 08:16:57.025709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
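For reference, the nvmf_tcp_init sequence traced above builds a two-namespace topology: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with address 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace with 10.0.0.1, TCP port 4420 is opened, and both directions are ping-checked before the target is started inside the namespace. A minimal hand-run sketch of the same setup, assuming the same interface names and 10.0.0.0/24 addressing used in this run:
# sketch only; mirrors the ip/iptables commands visible in the trace above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic reach the listener
ping -c 1 10.0.0.2                                                  # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> default ns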
00:24:26.657 [2024-06-11 08:16:57.025901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.657 [2024-06-11 08:16:57.026018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.657 [2024-06-11 08:16:57.026174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.657 [2024-06-11 08:16:57.026175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.229 08:16:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:27.229 08:16:57 -- common/autotest_common.sh@852 -- # return 0 00:24:27.229 08:16:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:27.229 08:16:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 08:16:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.229 08:16:57 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:27.229 08:16:57 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:27.229 08:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 08:16:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.229 08:16:57 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:27.229 08:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 08:16:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.229 08:16:57 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:27.229 08:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 [2024-06-11 08:16:57.793372] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.229 08:16:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.229 08:16:57 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:27.229 08:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 Malloc1 00:24:27.229 08:16:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.229 08:16:57 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.229 08:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 08:16:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.229 08:16:57 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:27.229 08:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 08:16:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.229 08:16:57 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.229 08:16:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.229 08:16:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.229 [2024-06-11 08:16:57.848740] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.229 08:16:57 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.229 08:16:57 -- target/perf_adq.sh@73 -- # perfpid=1147911 00:24:27.229 08:16:57 -- target/perf_adq.sh@74 -- # sleep 2 00:24:27.229 08:16:57 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:27.489 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.404 08:16:59 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:29.404 08:16:59 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:29.404 08:16:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:29.404 08:16:59 -- target/perf_adq.sh@76 -- # wc -l 00:24:29.404 08:16:59 -- common/autotest_common.sh@10 -- # set +x 00:24:29.404 08:16:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:29.404 08:16:59 -- target/perf_adq.sh@76 -- # count=4 00:24:29.404 08:16:59 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:29.404 08:16:59 -- target/perf_adq.sh@81 -- # wait 1147911 00:24:37.547 Initializing NVMe Controllers 00:24:37.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:37.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:37.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:37.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:37.547 Initialization complete. Launching workers. 00:24:37.547 ======================================================== 00:24:37.547 Latency(us) 00:24:37.547 Device Information : IOPS MiB/s Average min max 00:24:37.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11780.70 46.02 5432.58 1277.45 9107.27 00:24:37.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15448.40 60.35 4142.21 1103.72 9589.65 00:24:37.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14290.50 55.82 4478.18 1215.72 11612.07 00:24:37.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14985.10 58.54 4270.33 1072.59 10827.85 00:24:37.547 ======================================================== 00:24:37.547 Total : 56504.69 220.72 4530.19 1072.59 11612.07 00:24:37.547 00:24:37.547 08:17:07 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:37.547 08:17:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:37.547 08:17:07 -- nvmf/common.sh@116 -- # sync 00:24:37.547 08:17:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:37.547 08:17:07 -- nvmf/common.sh@119 -- # set +e 00:24:37.547 08:17:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:37.547 08:17:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:37.547 rmmod nvme_tcp 00:24:37.547 rmmod nvme_fabrics 00:24:37.547 rmmod nvme_keyring 00:24:37.547 08:17:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:37.547 08:17:08 -- nvmf/common.sh@123 -- # set -e 00:24:37.547 08:17:08 -- nvmf/common.sh@124 -- # return 0 00:24:37.547 08:17:08 -- nvmf/common.sh@477 -- # '[' -n 1147552 ']' 00:24:37.547 08:17:08 -- nvmf/common.sh@478 -- # killprocess 1147552 00:24:37.547 08:17:08 -- common/autotest_common.sh@926 -- # '[' -z 1147552 ']' 00:24:37.547 08:17:08 -- common/autotest_common.sh@930 -- 
# kill -0 1147552 00:24:37.547 08:17:08 -- common/autotest_common.sh@931 -- # uname 00:24:37.547 08:17:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:37.547 08:17:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1147552 00:24:37.547 08:17:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:37.547 08:17:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:37.547 08:17:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1147552' 00:24:37.547 killing process with pid 1147552 00:24:37.547 08:17:08 -- common/autotest_common.sh@945 -- # kill 1147552 00:24:37.547 08:17:08 -- common/autotest_common.sh@950 -- # wait 1147552 00:24:37.809 08:17:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:37.809 08:17:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:37.809 08:17:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:37.809 08:17:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.809 08:17:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:37.809 08:17:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.809 08:17:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.809 08:17:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.743 08:17:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:39.743 08:17:10 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:39.743 08:17:10 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:41.666 08:17:11 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:43.579 08:17:13 -- target/perf_adq.sh@54 -- # sleep 5 00:24:48.865 08:17:18 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:48.865 08:17:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:48.865 08:17:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.865 08:17:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:48.865 08:17:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:48.865 08:17:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:48.865 08:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.865 08:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.865 08:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.865 08:17:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:48.865 08:17:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:48.865 08:17:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:48.865 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:24:48.865 08:17:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:48.865 08:17:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:48.865 08:17:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:48.865 08:17:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:48.865 08:17:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:48.865 08:17:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:48.865 08:17:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:48.865 08:17:18 -- nvmf/common.sh@294 -- # net_devs=() 00:24:48.865 08:17:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:48.865 08:17:18 -- nvmf/common.sh@295 -- # e810=() 00:24:48.865 08:17:18 -- nvmf/common.sh@295 -- # local -ga e810 00:24:48.865 08:17:18 -- nvmf/common.sh@296 -- # x722=() 00:24:48.865 08:17:18 -- nvmf/common.sh@296 -- # local -ga x722 00:24:48.866 08:17:18 -- nvmf/common.sh@297 -- # mlx=() 00:24:48.866 08:17:18 
-- nvmf/common.sh@297 -- # local -ga mlx 00:24:48.866 08:17:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.866 08:17:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:48.866 08:17:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:48.866 08:17:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:48.866 08:17:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:48.866 08:17:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:48.866 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:48.866 08:17:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:48.866 08:17:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:48.866 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:48.866 08:17:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:48.866 08:17:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:48.866 08:17:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.866 08:17:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:48.866 08:17:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.866 08:17:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:48.866 Found net devices under 0000:31:00.0: cvl_0_0 00:24:48.866 08:17:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.866 08:17:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:48.866 08:17:18 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.866 08:17:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:48.866 08:17:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.866 08:17:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:48.866 Found net devices under 0000:31:00.1: cvl_0_1 00:24:48.866 08:17:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.866 08:17:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:48.866 08:17:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:48.866 08:17:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:48.866 08:17:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:48.866 08:17:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.866 08:17:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.866 08:17:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.866 08:17:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:48.866 08:17:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.866 08:17:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.866 08:17:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:48.866 08:17:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.866 08:17:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.866 08:17:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:48.866 08:17:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:48.866 08:17:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.866 08:17:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.866 08:17:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.866 08:17:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.866 08:17:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:48.866 08:17:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.866 08:17:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.866 08:17:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.866 08:17:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:48.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:24:48.866 00:24:48.866 --- 10.0.0.2 ping statistics --- 00:24:48.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.866 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:24:48.866 08:17:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:24:48.866 00:24:48.866 --- 10.0.0.1 ping statistics --- 00:24:48.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.866 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:24:48.866 08:17:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.866 08:17:19 -- nvmf/common.sh@410 -- # return 0 00:24:48.866 08:17:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:48.866 08:17:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.866 08:17:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:48.866 08:17:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:48.866 08:17:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.866 08:17:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:48.866 08:17:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:48.866 08:17:19 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:48.866 08:17:19 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:48.866 08:17:19 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:48.866 08:17:19 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:48.866 net.core.busy_poll = 1 00:24:48.866 08:17:19 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:48.866 net.core.busy_read = 1 00:24:48.866 08:17:19 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:48.866 08:17:19 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:48.866 08:17:19 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:48.866 08:17:19 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:48.866 08:17:19 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:48.866 08:17:19 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:48.866 08:17:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:48.866 08:17:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:48.866 08:17:19 -- common/autotest_common.sh@10 -- # set +x 00:24:48.866 08:17:19 -- nvmf/common.sh@469 -- # nvmfpid=1152420 00:24:48.866 08:17:19 -- nvmf/common.sh@470 -- # waitforlisten 1152420 00:24:48.866 08:17:19 -- common/autotest_common.sh@819 -- # '[' -z 1152420 ']' 00:24:48.866 08:17:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.866 08:17:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:48.866 08:17:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:48.866 08:17:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:48.866 08:17:19 -- common/autotest_common.sh@10 -- # set +x 00:24:48.866 08:17:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:48.866 [2024-06-11 08:17:19.428245] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:48.866 [2024-06-11 08:17:19.428306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.866 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.866 [2024-06-11 08:17:19.499722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.126 [2024-06-11 08:17:19.572074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:49.126 [2024-06-11 08:17:19.572208] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.126 [2024-06-11 08:17:19.572218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.126 [2024-06-11 08:17:19.572226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.126 [2024-06-11 08:17:19.572396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.126 [2024-06-11 08:17:19.572531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.126 [2024-06-11 08:17:19.572806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.126 [2024-06-11 08:17:19.572808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.697 08:17:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:49.697 08:17:20 -- common/autotest_common.sh@852 -- # return 0 00:24:49.697 08:17:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:49.697 08:17:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:49.697 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.697 08:17:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.697 08:17:20 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:49.697 08:17:20 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:49.697 08:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.697 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.697 08:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.697 08:17:20 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:49.697 08:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.697 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.697 08:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.697 08:17:20 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:49.697 08:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.697 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.697 [2024-06-11 08:17:20.316710] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.697 08:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
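The second pass exercises ADQ: the ice port is switched to hardware traffic-class offload, busy polling is enabled, the NVMe/TCP listener traffic (dst port 4420 on 10.0.0.2) is steered into a dedicated traffic class with tc mqprio/flower, and the target is then configured with socket placement id 1 and socket priority 1. A condensed sketch of the two halves, using only the flags that appear in the trace and assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (in this job the NIC-side commands are run via ip netns exec cvl_0_0_ns_spdk):
# NIC side: hardware TC offload plus a flower filter that pins port-4420 traffic to hw_tc 1
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# SPDK side: equivalent of the rpc_cmd calls traced above (sketch, not part of the captured run)
scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1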
00:24:49.697 08:17:20 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:49.697 08:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.697 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 Malloc1 00:24:49.957 08:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.957 08:17:20 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:49.957 08:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.957 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 08:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.957 08:17:20 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:49.957 08:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.957 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 08:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.957 08:17:20 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.957 08:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.957 08:17:20 -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 [2024-06-11 08:17:20.372079] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.957 08:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.957 08:17:20 -- target/perf_adq.sh@94 -- # perfpid=1152770 00:24:49.957 08:17:20 -- target/perf_adq.sh@95 -- # sleep 2 00:24:49.957 08:17:20 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:49.957 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.868 08:17:22 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:51.868 08:17:22 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:51.868 08:17:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.868 08:17:22 -- target/perf_adq.sh@97 -- # wc -l 00:24:51.868 08:17:22 -- common/autotest_common.sh@10 -- # set +x 00:24:51.868 08:17:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.868 08:17:22 -- target/perf_adq.sh@97 -- # count=2 00:24:51.868 08:17:22 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:51.868 08:17:22 -- target/perf_adq.sh@103 -- # wait 1152770 00:25:00.003 Initializing NVMe Controllers 00:25:00.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:00.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:00.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:00.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:00.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:00.003 Initialization complete. Launching workers. 
00:25:00.003 ======================================================== 00:25:00.003 Latency(us) 00:25:00.003 Device Information : IOPS MiB/s Average min max 00:25:00.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 20055.70 78.34 3190.96 974.95 45923.17 00:25:00.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7625.36 29.79 8393.89 1136.61 56659.33 00:25:00.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6989.37 27.30 9157.40 1379.03 56346.13 00:25:00.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7357.56 28.74 8700.24 1168.49 54893.89 00:25:00.003 ======================================================== 00:25:00.003 Total : 42027.99 164.17 6091.66 974.95 56659.33 00:25:00.003 00:25:00.003 08:17:30 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:00.003 08:17:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:00.003 08:17:30 -- nvmf/common.sh@116 -- # sync 00:25:00.003 08:17:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:00.003 08:17:30 -- nvmf/common.sh@119 -- # set +e 00:25:00.003 08:17:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:00.003 08:17:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:00.003 rmmod nvme_tcp 00:25:00.003 rmmod nvme_fabrics 00:25:00.003 rmmod nvme_keyring 00:25:00.003 08:17:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:00.003 08:17:30 -- nvmf/common.sh@123 -- # set -e 00:25:00.003 08:17:30 -- nvmf/common.sh@124 -- # return 0 00:25:00.003 08:17:30 -- nvmf/common.sh@477 -- # '[' -n 1152420 ']' 00:25:00.003 08:17:30 -- nvmf/common.sh@478 -- # killprocess 1152420 00:25:00.003 08:17:30 -- common/autotest_common.sh@926 -- # '[' -z 1152420 ']' 00:25:00.003 08:17:30 -- common/autotest_common.sh@930 -- # kill -0 1152420 00:25:00.003 08:17:30 -- common/autotest_common.sh@931 -- # uname 00:25:00.003 08:17:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:00.003 08:17:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1152420 00:25:00.263 08:17:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:00.263 08:17:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:00.263 08:17:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1152420' 00:25:00.263 killing process with pid 1152420 00:25:00.263 08:17:30 -- common/autotest_common.sh@945 -- # kill 1152420 00:25:00.263 08:17:30 -- common/autotest_common.sh@950 -- # wait 1152420 00:25:00.263 08:17:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:00.263 08:17:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:00.263 08:17:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:00.263 08:17:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:00.263 08:17:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:00.263 08:17:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.263 08:17:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.263 08:17:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.569 08:17:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:03.569 08:17:33 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:03.569 00:25:03.569 real 0m52.762s 00:25:03.569 user 2m48.696s 00:25:03.569 sys 0m10.275s 00:25:03.569 08:17:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.569 08:17:33 -- common/autotest_common.sh@10 -- # set +x 00:25:03.569 
************************************ 00:25:03.569 END TEST nvmf_perf_adq 00:25:03.569 ************************************ 00:25:03.569 08:17:33 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:03.569 08:17:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:03.569 08:17:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.569 08:17:33 -- common/autotest_common.sh@10 -- # set +x 00:25:03.569 ************************************ 00:25:03.569 START TEST nvmf_shutdown 00:25:03.569 ************************************ 00:25:03.569 08:17:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:03.569 * Looking for test storage... 00:25:03.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.569 08:17:34 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.569 08:17:34 -- nvmf/common.sh@7 -- # uname -s 00:25:03.569 08:17:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.569 08:17:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.569 08:17:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.569 08:17:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.569 08:17:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.569 08:17:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.569 08:17:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.569 08:17:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.569 08:17:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.569 08:17:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.569 08:17:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:03.569 08:17:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:03.569 08:17:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.569 08:17:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.569 08:17:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.569 08:17:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.569 08:17:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.569 08:17:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.569 08:17:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.569 08:17:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.569 08:17:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.569 08:17:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.569 08:17:34 -- paths/export.sh@5 -- # export PATH 00:25:03.569 08:17:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.569 08:17:34 -- nvmf/common.sh@46 -- # : 0 00:25:03.569 08:17:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:03.569 08:17:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:03.569 08:17:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:03.569 08:17:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.569 08:17:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.569 08:17:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:03.569 08:17:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:03.569 08:17:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:03.569 08:17:34 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.569 08:17:34 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.569 08:17:34 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:03.569 08:17:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:03.569 08:17:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.569 08:17:34 -- common/autotest_common.sh@10 -- # set +x 00:25:03.569 ************************************ 00:25:03.569 START TEST nvmf_shutdown_tc1 00:25:03.569 ************************************ 00:25:03.569 08:17:34 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:25:03.569 08:17:34 -- target/shutdown.sh@74 -- # starttarget 00:25:03.569 08:17:34 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:03.569 08:17:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:03.569 08:17:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.569 08:17:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:03.569 08:17:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:03.569 08:17:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:03.569 
08:17:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.569 08:17:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.569 08:17:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.569 08:17:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:03.569 08:17:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:03.569 08:17:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:03.569 08:17:34 -- common/autotest_common.sh@10 -- # set +x 00:25:11.710 08:17:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:11.710 08:17:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:11.710 08:17:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:11.710 08:17:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:11.710 08:17:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:11.710 08:17:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:11.710 08:17:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:11.710 08:17:40 -- nvmf/common.sh@294 -- # net_devs=() 00:25:11.710 08:17:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:11.710 08:17:40 -- nvmf/common.sh@295 -- # e810=() 00:25:11.710 08:17:40 -- nvmf/common.sh@295 -- # local -ga e810 00:25:11.710 08:17:40 -- nvmf/common.sh@296 -- # x722=() 00:25:11.710 08:17:40 -- nvmf/common.sh@296 -- # local -ga x722 00:25:11.710 08:17:40 -- nvmf/common.sh@297 -- # mlx=() 00:25:11.710 08:17:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:11.710 08:17:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.710 08:17:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:11.710 08:17:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:11.710 08:17:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:11.710 08:17:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.710 08:17:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:11.710 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:11.710 08:17:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:25:11.710 08:17:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:11.710 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:11.710 08:17:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:11.710 08:17:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.710 08:17:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.710 08:17:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.710 08:17:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.710 08:17:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:11.710 Found net devices under 0000:31:00.0: cvl_0_0 00:25:11.710 08:17:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.710 08:17:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.710 08:17:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.710 08:17:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.710 08:17:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.710 08:17:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:11.710 Found net devices under 0000:31:00.1: cvl_0_1 00:25:11.710 08:17:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.710 08:17:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:11.710 08:17:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:11.710 08:17:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:11.710 08:17:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:11.710 08:17:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.710 08:17:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.711 08:17:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.711 08:17:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:11.711 08:17:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.711 08:17:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.711 08:17:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:11.711 08:17:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.711 08:17:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.711 08:17:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:11.711 08:17:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:11.711 08:17:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.711 08:17:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.711 08:17:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.711 08:17:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.711 08:17:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:11.711 08:17:40 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.711 08:17:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.711 08:17:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.711 08:17:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:11.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:25:11.711 00:25:11.711 --- 10.0.0.2 ping statistics --- 00:25:11.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.711 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:25:11.711 08:17:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:11.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:25:11.711 00:25:11.711 --- 10.0.0.1 ping statistics --- 00:25:11.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.711 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:25:11.711 08:17:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.711 08:17:41 -- nvmf/common.sh@410 -- # return 0 00:25:11.711 08:17:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:11.711 08:17:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.711 08:17:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:11.711 08:17:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:11.711 08:17:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.711 08:17:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:11.711 08:17:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:11.711 08:17:41 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:11.711 08:17:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:11.711 08:17:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.711 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:11.711 08:17:41 -- nvmf/common.sh@469 -- # nvmfpid=1159161 00:25:11.711 08:17:41 -- nvmf/common.sh@470 -- # waitforlisten 1159161 00:25:11.711 08:17:41 -- common/autotest_common.sh@819 -- # '[' -z 1159161 ']' 00:25:11.711 08:17:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.711 08:17:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.711 08:17:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.711 08:17:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.711 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:11.711 08:17:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:11.711 [2024-06-11 08:17:41.210033] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
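Before the target came up above, nvmf_tcp_init wired the two detected E810 ports into a loopback-style topology: one port is moved into a private network namespace and becomes the target side, the other stays in the default namespace as the initiator side. Condensed from the trace (the cvl_0_0/cvl_0_1 names and the 10.0.0.x addresses are what this particular host detected and chose, not fixed constants):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check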
00:25:11.711 [2024-06-11 08:17:41.210093] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.711 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.711 [2024-06-11 08:17:41.299947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.711 [2024-06-11 08:17:41.391715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.711 [2024-06-11 08:17:41.391868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.711 [2024-06-11 08:17:41.391878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.711 [2024-06-11 08:17:41.391886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.711 [2024-06-11 08:17:41.392040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.711 [2024-06-11 08:17:41.392208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.711 [2024-06-11 08:17:41.392378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.711 [2024-06-11 08:17:41.392378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:11.711 08:17:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:11.711 08:17:41 -- common/autotest_common.sh@852 -- # return 0 00:25:11.711 08:17:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:11.711 08:17:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:11.711 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:11.711 08:17:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.711 08:17:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.711 08:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.711 08:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:11.711 [2024-06-11 08:17:42.036468] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.711 08:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.711 08:17:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:11.711 08:17:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:11.711 08:17:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.711 08:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:11.711 08:17:42 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- 
target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.711 08:17:42 -- target/shutdown.sh@28 -- # cat 00:25:11.711 08:17:42 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:11.711 08:17:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.711 08:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:11.711 Malloc1 00:25:11.711 [2024-06-11 08:17:42.139964] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.711 Malloc2 00:25:11.711 Malloc3 00:25:11.711 Malloc4 00:25:11.711 Malloc5 00:25:11.711 Malloc6 00:25:11.711 Malloc7 00:25:11.972 Malloc8 00:25:11.972 Malloc9 00:25:11.972 Malloc10 00:25:11.972 08:17:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.972 08:17:42 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:11.972 08:17:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:11.972 08:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:11.972 08:17:42 -- target/shutdown.sh@78 -- # perfpid=1159387 00:25:11.972 08:17:42 -- target/shutdown.sh@79 -- # waitforlisten 1159387 /var/tmp/bdevperf.sock 00:25:11.972 08:17:42 -- common/autotest_common.sh@819 -- # '[' -z 1159387 ']' 00:25:11.972 08:17:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:11.972 08:17:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.972 08:17:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:11.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
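The long run of 'for i in "${num_subsystems[@]}" / cat' entries above is shutdown.sh batching one block of RPCs per subsystem into rpcs.txt; the block contents themselves are not echoed in the trace. Judging by the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2:4420 reported above, each of the ten iterations boils down to roughly the following rpc.py calls (a hedged reconstruction for orientation, options abridged, not the literal contents of rpcs.txt):

i=1                                               # repeated for i=1..10
rpc.py bdev_malloc_create 64 512 -b Malloc$i      # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE set earlier
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420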
00:25:11.972 08:17:42 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:11.972 08:17:42 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:11.972 08:17:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.972 08:17:42 -- common/autotest_common.sh@10 -- # set +x 00:25:11.972 08:17:42 -- nvmf/common.sh@520 -- # config=() 00:25:11.972 08:17:42 -- nvmf/common.sh@520 -- # local subsystem config 00:25:11.972 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.972 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.972 { 00:25:11.972 "params": { 00:25:11.972 "name": "Nvme$subsystem", 00:25:11.972 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- 
nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 [2024-06-11 08:17:42.585511] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:11.973 [2024-06-11 08:17:42.585564] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 08:17:42 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:11.973 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.973 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.973 { 00:25:11.973 "params": { 00:25:11.973 "name": "Nvme$subsystem", 00:25:11.973 "trtype": "$TEST_TRANSPORT", 00:25:11.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.973 "adrfam": "ipv4", 00:25:11.973 "trsvcid": "$NVMF_PORT", 00:25:11.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.973 "hdgst": ${hdgst:-false}, 00:25:11.973 "ddgst": ${ddgst:-false} 00:25:11.973 }, 00:25:11.973 "method": "bdev_nvme_attach_controller" 00:25:11.973 } 00:25:11.973 EOF 00:25:11.973 )") 00:25:11.973 08:17:42 -- nvmf/common.sh@542 -- # cat 00:25:12.235 08:17:42 -- nvmf/common.sh@544 -- # jq . 00:25:12.235 08:17:42 -- nvmf/common.sh@545 -- # IFS=, 00:25:12.235 08:17:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme1", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme2", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme3", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme4", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme5", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 
"trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme6", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme7", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme8", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme9", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 },{ 00:25:12.235 "params": { 00:25:12.235 "name": "Nvme10", 00:25:12.235 "trtype": "tcp", 00:25:12.235 "traddr": "10.0.0.2", 00:25:12.235 "adrfam": "ipv4", 00:25:12.235 "trsvcid": "4420", 00:25:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:12.235 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:12.235 "hdgst": false, 00:25:12.235 "ddgst": false 00:25:12.235 }, 00:25:12.235 "method": "bdev_nvme_attach_controller" 00:25:12.235 }' 00:25:12.235 [2024-06-11 08:17:42.646981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.235 [2024-06-11 08:17:42.710247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.620 08:17:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:13.620 08:17:43 -- common/autotest_common.sh@852 -- # return 0 00:25:13.620 08:17:43 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:13.620 08:17:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.620 08:17:43 -- common/autotest_common.sh@10 -- # set +x 00:25:13.620 08:17:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.620 08:17:44 -- target/shutdown.sh@83 -- # kill -9 1159387 00:25:13.620 08:17:44 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:13.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1159387 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:13.620 08:17:44 -- target/shutdown.sh@87 -- # sleep 1 
00:25:14.563 08:17:45 -- target/shutdown.sh@88 -- # kill -0 1159161 00:25:14.563 08:17:45 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:14.563 08:17:45 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:14.563 08:17:45 -- nvmf/common.sh@520 -- # config=() 00:25:14.563 08:17:45 -- nvmf/common.sh@520 -- # local subsystem config 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 [2024-06-11 08:17:45.061385] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:14.563 [2024-06-11 08:17:45.061445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160086 ] 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.563 "hdgst": ${hdgst:-false}, 00:25:14.563 "ddgst": ${ddgst:-false} 00:25:14.563 }, 00:25:14.563 "method": "bdev_nvme_attach_controller" 00:25:14.563 } 00:25:14.563 EOF 00:25:14.563 )") 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.563 08:17:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.563 08:17:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.563 { 00:25:14.563 "params": { 00:25:14.563 "name": "Nvme$subsystem", 00:25:14.563 "trtype": "$TEST_TRANSPORT", 00:25:14.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.563 "adrfam": "ipv4", 00:25:14.563 "trsvcid": "$NVMF_PORT", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.564 "hdgst": ${hdgst:-false}, 00:25:14.564 "ddgst": ${ddgst:-false} 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 } 00:25:14.564 EOF 00:25:14.564 )") 00:25:14.564 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.564 08:17:45 -- nvmf/common.sh@542 -- # cat 00:25:14.564 08:17:45 -- nvmf/common.sh@544 -- # jq . 00:25:14.564 08:17:45 -- nvmf/common.sh@545 -- # IFS=, 00:25:14.564 08:17:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme1", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme2", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme3", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme4", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme5", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme6", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme7", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme8", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme9", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 },{ 00:25:14.564 "params": { 00:25:14.564 "name": "Nvme10", 00:25:14.564 "trtype": "tcp", 00:25:14.564 "traddr": "10.0.0.2", 00:25:14.564 "adrfam": "ipv4", 00:25:14.564 "trsvcid": "4420", 00:25:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:14.564 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:14.564 "hdgst": false, 00:25:14.564 "ddgst": false 00:25:14.564 }, 00:25:14.564 "method": "bdev_nvme_attach_controller" 00:25:14.564 }' 00:25:14.564 [2024-06-11 08:17:45.122721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.564 [2024-06-11 08:17:45.184514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.948 Running I/O for 1 seconds... 
00:25:17.334
00:25:17.334 Latency(us)
00:25:17.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:17.334 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme1n1 : 1.05 413.40 25.84 0.00 0.00 151800.28 19333.12 141557.76
00:25:17.334 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme2n1 : 1.06 408.65 25.54 0.00 0.00 151574.02 27634.35 123207.68
00:25:17.334 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme3n1 : 1.09 441.13 27.57 0.00 0.00 140823.36 17803.95 120586.24
00:25:17.334 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme4n1 : 1.09 442.44 27.65 0.00 0.00 139868.42 11578.03 117090.99
00:25:17.334 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme5n1 : 1.13 426.76 26.67 0.00 0.00 139160.94 11523.41 110974.29
00:25:17.334 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme6n1 : 1.08 402.77 25.17 0.00 0.00 149019.15 27743.57 132819.63
00:25:17.334 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme7n1 : 1.10 440.04 27.50 0.00 0.00 137268.63 14745.60 111411.20
00:25:17.334 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme8n1 : 1.10 438.68 27.42 0.00 0.00 136815.95 13598.72 115343.36
00:25:17.334 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme9n1 : 1.10 438.11 27.38 0.00 0.00 136135.15 12014.93 120586.24
00:25:17.334 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:17.334 Verification LBA range: start 0x0 length 0x400
00:25:17.334 Nvme10n1 : 1.10 438.17 27.39 0.00 0.00 135164.90 10212.69 123207.68
00:25:17.334 ===================================================================================================================
00:25:17.334 Total : 4290.14 268.13 0.00 0.00 141478.13 10212.69 141557.76
00:25:17.334 08:17:47 -- target/shutdown.sh@93 -- # stoptarget
00:25:17.334 08:17:47 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:25:17.334 08:17:47 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:17.334 08:17:47 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:17.334 08:17:47 -- target/shutdown.sh@45 -- # nvmftestfini
00:25:17.334 08:17:47 -- nvmf/common.sh@476 -- # nvmfcleanup
00:25:17.334 08:17:47 -- nvmf/common.sh@116 -- # sync
00:25:17.334 08:17:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:17.334 08:17:47 -- nvmf/common.sh@119 -- # set +e
00:25:17.334 08:17:47 -- nvmf/common.sh@120 -- # for i in {1..20}
00:25:17.334 08:17:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:25:17.334 rmmod nvme_tcp
00:25:17.334 rmmod nvme_fabrics
00:25:17.334 rmmod
nvme_keyring 00:25:17.334 08:17:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:17.334 08:17:47 -- nvmf/common.sh@123 -- # set -e 00:25:17.334 08:17:47 -- nvmf/common.sh@124 -- # return 0 00:25:17.334 08:17:47 -- nvmf/common.sh@477 -- # '[' -n 1159161 ']' 00:25:17.334 08:17:47 -- nvmf/common.sh@478 -- # killprocess 1159161 00:25:17.334 08:17:47 -- common/autotest_common.sh@926 -- # '[' -z 1159161 ']' 00:25:17.334 08:17:47 -- common/autotest_common.sh@930 -- # kill -0 1159161 00:25:17.334 08:17:47 -- common/autotest_common.sh@931 -- # uname 00:25:17.334 08:17:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:17.334 08:17:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1159161 00:25:17.334 08:17:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:17.334 08:17:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:17.334 08:17:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1159161' 00:25:17.334 killing process with pid 1159161 00:25:17.334 08:17:47 -- common/autotest_common.sh@945 -- # kill 1159161 00:25:17.334 08:17:47 -- common/autotest_common.sh@950 -- # wait 1159161 00:25:17.595 08:17:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:17.595 08:17:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:17.595 08:17:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:17.595 08:17:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.595 08:17:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:17.595 08:17:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.595 08:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.595 08:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.565 08:17:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:19.565 00:25:19.565 real 0m16.081s 00:25:19.565 user 0m32.617s 00:25:19.565 sys 0m6.380s 00:25:19.565 08:17:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.565 08:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:19.565 ************************************ 00:25:19.565 END TEST nvmf_shutdown_tc1 00:25:19.565 ************************************ 00:25:19.565 08:17:50 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:19.565 08:17:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:19.565 08:17:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:19.565 08:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:19.565 ************************************ 00:25:19.565 START TEST nvmf_shutdown_tc2 00:25:19.565 ************************************ 00:25:19.565 08:17:50 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:19.565 08:17:50 -- target/shutdown.sh@98 -- # starttarget 00:25:19.565 08:17:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:19.565 08:17:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:19.565 08:17:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.565 08:17:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:19.565 08:17:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:19.565 08:17:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:19.565 08:17:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.565 08:17:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.565 08:17:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.565 
08:17:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:19.565 08:17:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:19.565 08:17:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:19.565 08:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:19.565 08:17:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:19.565 08:17:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:19.565 08:17:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:19.827 08:17:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:19.827 08:17:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:19.827 08:17:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:19.827 08:17:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:19.827 08:17:50 -- nvmf/common.sh@294 -- # net_devs=() 00:25:19.827 08:17:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:19.827 08:17:50 -- nvmf/common.sh@295 -- # e810=() 00:25:19.827 08:17:50 -- nvmf/common.sh@295 -- # local -ga e810 00:25:19.827 08:17:50 -- nvmf/common.sh@296 -- # x722=() 00:25:19.827 08:17:50 -- nvmf/common.sh@296 -- # local -ga x722 00:25:19.827 08:17:50 -- nvmf/common.sh@297 -- # mlx=() 00:25:19.827 08:17:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:19.827 08:17:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.827 08:17:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:19.827 08:17:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:19.827 08:17:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:19.827 08:17:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:19.827 08:17:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:19.827 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:19.827 08:17:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:19.827 08:17:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:19.827 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:19.827 08:17:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:19.827 
08:17:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:19.827 08:17:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:19.827 08:17:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.827 08:17:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:19.827 08:17:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.827 08:17:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:19.827 Found net devices under 0000:31:00.0: cvl_0_0 00:25:19.827 08:17:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.827 08:17:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:19.827 08:17:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.827 08:17:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:19.827 08:17:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.827 08:17:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:19.827 Found net devices under 0000:31:00.1: cvl_0_1 00:25:19.827 08:17:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.827 08:17:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:19.827 08:17:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:19.827 08:17:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:19.827 08:17:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:19.827 08:17:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.827 08:17:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.827 08:17:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.827 08:17:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:19.827 08:17:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.827 08:17:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.827 08:17:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:19.828 08:17:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.828 08:17:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.828 08:17:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:19.828 08:17:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:19.828 08:17:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.828 08:17:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.828 08:17:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.828 08:17:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.828 08:17:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:19.828 08:17:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.089 08:17:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.089 08:17:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:25:20.089 08:17:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:20.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:25:20.089 00:25:20.089 --- 10.0.0.2 ping statistics --- 00:25:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.089 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:25:20.089 08:17:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:25:20.089 00:25:20.089 --- 10.0.0.1 ping statistics --- 00:25:20.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.089 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:20.089 08:17:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.089 08:17:50 -- nvmf/common.sh@410 -- # return 0 00:25:20.089 08:17:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:20.089 08:17:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.089 08:17:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:20.089 08:17:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:20.089 08:17:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.089 08:17:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:20.089 08:17:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:20.089 08:17:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:20.089 08:17:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:20.089 08:17:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.089 08:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:20.089 08:17:50 -- nvmf/common.sh@469 -- # nvmfpid=1161219 00:25:20.089 08:17:50 -- nvmf/common.sh@470 -- # waitforlisten 1161219 00:25:20.089 08:17:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:20.089 08:17:50 -- common/autotest_common.sh@819 -- # '[' -z 1161219 ']' 00:25:20.089 08:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.089 08:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.089 08:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.089 08:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.089 08:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:20.089 [2024-06-11 08:17:50.623101] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
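The nvmf_tcp_init sequence traced above reduces to a simple two-port loopback topology: one E810 port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2 as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule accepts TCP/4420. A minimal standalone sketch using the interface and namespace names from this run (a reconstruction of the traced commands, not the test script itself):

  # assumes cvl_0_0 / cvl_0_1 are two ports of the same NIC, cabled back-to-back
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # accept TCP/4420 on the initiator-side port
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

Everything launched through NVMF_TARGET_NS_CMD (the nvmf_tgt started here) then runs inside cvl_0_0_ns_spdk and listens on 10.0.0.2, while bdevperf connects from the root namespace over 10.0.0.1.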
00:25:20.089 [2024-06-11 08:17:50.623163] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.089 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.089 [2024-06-11 08:17:50.710382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.350 [2024-06-11 08:17:50.769737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:20.350 [2024-06-11 08:17:50.769832] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.350 [2024-06-11 08:17:50.769838] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.350 [2024-06-11 08:17:50.769844] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.350 [2024-06-11 08:17:50.769969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.350 [2024-06-11 08:17:50.770127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.350 [2024-06-11 08:17:50.770280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.350 [2024-06-11 08:17:50.770283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:20.922 08:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:20.922 08:17:51 -- common/autotest_common.sh@852 -- # return 0 00:25:20.922 08:17:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:20.922 08:17:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:20.922 08:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.922 08:17:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.922 08:17:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:20.922 08:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.922 08:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.922 [2024-06-11 08:17:51.437431] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.922 08:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.922 08:17:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:20.922 08:17:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:20.922 08:17:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.922 08:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.922 08:17:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- 
target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.922 08:17:51 -- target/shutdown.sh@28 -- # cat 00:25:20.922 08:17:51 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:20.922 08:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.922 08:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.922 Malloc1 00:25:20.922 [2024-06-11 08:17:51.536254] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.922 Malloc2 00:25:21.197 Malloc3 00:25:21.197 Malloc4 00:25:21.197 Malloc5 00:25:21.197 Malloc6 00:25:21.197 Malloc7 00:25:21.197 Malloc8 00:25:21.197 Malloc9 00:25:21.463 Malloc10 00:25:21.463 08:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.463 08:17:51 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:21.463 08:17:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:21.463 08:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:21.463 08:17:51 -- target/shutdown.sh@102 -- # perfpid=1161577 00:25:21.463 08:17:51 -- target/shutdown.sh@103 -- # waitforlisten 1161577 /var/tmp/bdevperf.sock 00:25:21.463 08:17:51 -- common/autotest_common.sh@819 -- # '[' -z 1161577 ']' 00:25:21.463 08:17:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.463 08:17:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.463 08:17:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
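The create_subsystems step above only shows the per-subsystem `cat` calls that append a batch to rpcs.txt before a single rpc_cmd (shutdown.sh@35) creates everything at once; the batch contents themselves are not echoed. Judging by the result visible in the trace (bdevs Malloc1 through Malloc10, subsystems cnode1..cnode10, and an NVMe/TCP listener on 10.0.0.2 port 4420), each iteration is roughly equivalent to the following rpc.py calls. The bdev size, block size and serial numbers below are assumptions for illustration, not values taken from the log:

  # hypothetical per-subsystem setup matching the names observed above ($i = 1..10)
  rpc.py bdev_malloc_create -b Malloc$i 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420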
00:25:21.463 08:17:51 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:21.463 08:17:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.463 08:17:51 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:21.463 08:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:21.463 08:17:51 -- nvmf/common.sh@520 -- # config=() 00:25:21.463 08:17:51 -- nvmf/common.sh@520 -- # local subsystem config 00:25:21.463 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.463 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.463 { 00:25:21.463 "params": { 00:25:21.463 "name": "Nvme$subsystem", 00:25:21.463 "trtype": "$TEST_TRANSPORT", 00:25:21.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.463 "adrfam": "ipv4", 00:25:21.463 "trsvcid": "$NVMF_PORT", 00:25:21.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.463 "hdgst": ${hdgst:-false}, 00:25:21.463 "ddgst": ${ddgst:-false} 00:25:21.463 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 
00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 [2024-06-11 08:17:51.976817] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:21.464 [2024-06-11 08:17:51.976871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161577 ] 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:51 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 08:17:51 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:21.464 08:17:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:21.464 { 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme$subsystem", 00:25:21.464 "trtype": "$TEST_TRANSPORT", 00:25:21.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "$NVMF_PORT", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:21.464 "hdgst": ${hdgst:-false}, 00:25:21.464 "ddgst": ${ddgst:-false} 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 } 00:25:21.464 EOF 00:25:21.464 )") 00:25:21.464 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.464 08:17:52 -- nvmf/common.sh@542 -- # cat 00:25:21.464 08:17:52 -- nvmf/common.sh@544 -- # jq . 00:25:21.464 08:17:52 -- nvmf/common.sh@545 -- # IFS=, 00:25:21.464 08:17:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme1", 00:25:21.464 "trtype": "tcp", 00:25:21.464 "traddr": "10.0.0.2", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "4420", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:21.464 "hdgst": false, 00:25:21.464 "ddgst": false 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 },{ 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme2", 00:25:21.464 "trtype": "tcp", 00:25:21.464 "traddr": "10.0.0.2", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "4420", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:21.464 "hdgst": false, 00:25:21.464 "ddgst": false 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 },{ 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme3", 00:25:21.464 "trtype": "tcp", 00:25:21.464 "traddr": "10.0.0.2", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "4420", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:21.464 "hdgst": false, 00:25:21.464 "ddgst": false 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 },{ 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme4", 00:25:21.464 "trtype": "tcp", 00:25:21.464 "traddr": "10.0.0.2", 00:25:21.464 "adrfam": "ipv4", 00:25:21.464 "trsvcid": "4420", 00:25:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:21.464 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:21.464 "hdgst": false, 00:25:21.464 "ddgst": false 00:25:21.464 }, 00:25:21.464 "method": "bdev_nvme_attach_controller" 00:25:21.464 },{ 00:25:21.464 "params": { 00:25:21.464 "name": "Nvme5", 00:25:21.464 "trtype": "tcp", 00:25:21.464 "traddr": "10.0.0.2", 00:25:21.465 
"adrfam": "ipv4", 00:25:21.465 "trsvcid": "4420", 00:25:21.465 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:21.465 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:21.465 "hdgst": false, 00:25:21.465 "ddgst": false 00:25:21.465 }, 00:25:21.465 "method": "bdev_nvme_attach_controller" 00:25:21.465 },{ 00:25:21.465 "params": { 00:25:21.465 "name": "Nvme6", 00:25:21.465 "trtype": "tcp", 00:25:21.465 "traddr": "10.0.0.2", 00:25:21.465 "adrfam": "ipv4", 00:25:21.465 "trsvcid": "4420", 00:25:21.465 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:21.465 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:21.465 "hdgst": false, 00:25:21.465 "ddgst": false 00:25:21.465 }, 00:25:21.465 "method": "bdev_nvme_attach_controller" 00:25:21.465 },{ 00:25:21.465 "params": { 00:25:21.465 "name": "Nvme7", 00:25:21.465 "trtype": "tcp", 00:25:21.465 "traddr": "10.0.0.2", 00:25:21.465 "adrfam": "ipv4", 00:25:21.465 "trsvcid": "4420", 00:25:21.465 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:21.465 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:21.465 "hdgst": false, 00:25:21.465 "ddgst": false 00:25:21.465 }, 00:25:21.465 "method": "bdev_nvme_attach_controller" 00:25:21.465 },{ 00:25:21.465 "params": { 00:25:21.465 "name": "Nvme8", 00:25:21.465 "trtype": "tcp", 00:25:21.465 "traddr": "10.0.0.2", 00:25:21.465 "adrfam": "ipv4", 00:25:21.465 "trsvcid": "4420", 00:25:21.465 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:21.465 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:21.465 "hdgst": false, 00:25:21.465 "ddgst": false 00:25:21.465 }, 00:25:21.465 "method": "bdev_nvme_attach_controller" 00:25:21.465 },{ 00:25:21.465 "params": { 00:25:21.465 "name": "Nvme9", 00:25:21.465 "trtype": "tcp", 00:25:21.465 "traddr": "10.0.0.2", 00:25:21.465 "adrfam": "ipv4", 00:25:21.465 "trsvcid": "4420", 00:25:21.465 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:21.465 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:21.465 "hdgst": false, 00:25:21.465 "ddgst": false 00:25:21.465 }, 00:25:21.465 "method": "bdev_nvme_attach_controller" 00:25:21.465 },{ 00:25:21.465 "params": { 00:25:21.465 "name": "Nvme10", 00:25:21.465 "trtype": "tcp", 00:25:21.465 "traddr": "10.0.0.2", 00:25:21.465 "adrfam": "ipv4", 00:25:21.465 "trsvcid": "4420", 00:25:21.465 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:21.465 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:21.465 "hdgst": false, 00:25:21.465 "ddgst": false 00:25:21.465 }, 00:25:21.465 "method": "bdev_nvme_attach_controller" 00:25:21.465 }' 00:25:21.465 [2024-06-11 08:17:52.037669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.465 [2024-06-11 08:17:52.100207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.850 Running I/O for 10 seconds... 
00:25:23.422 08:17:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:23.422 08:17:54 -- common/autotest_common.sh@852 -- # return 0 00:25:23.422 08:17:54 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:23.422 08:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.422 08:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:23.422 08:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.422 08:17:54 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:23.422 08:17:54 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:23.422 08:17:54 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:23.422 08:17:54 -- target/shutdown.sh@57 -- # local ret=1 00:25:23.422 08:17:54 -- target/shutdown.sh@58 -- # local i 00:25:23.422 08:17:54 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:23.422 08:17:54 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:23.422 08:17:54 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:23.422 08:17:54 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:23.422 08:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.422 08:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:23.683 08:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.683 08:17:54 -- target/shutdown.sh@60 -- # read_io_count=215 00:25:23.683 08:17:54 -- target/shutdown.sh@63 -- # '[' 215 -ge 100 ']' 00:25:23.683 08:17:54 -- target/shutdown.sh@64 -- # ret=0 00:25:23.683 08:17:54 -- target/shutdown.sh@65 -- # break 00:25:23.683 08:17:54 -- target/shutdown.sh@69 -- # return 0 00:25:23.683 08:17:54 -- target/shutdown.sh@109 -- # killprocess 1161577 00:25:23.683 08:17:54 -- common/autotest_common.sh@926 -- # '[' -z 1161577 ']' 00:25:23.683 08:17:54 -- common/autotest_common.sh@930 -- # kill -0 1161577 00:25:23.683 08:17:54 -- common/autotest_common.sh@931 -- # uname 00:25:23.683 08:17:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.683 08:17:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1161577 00:25:23.683 08:17:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:23.683 08:17:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:23.683 08:17:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1161577' 00:25:23.683 killing process with pid 1161577 00:25:23.683 08:17:54 -- common/autotest_common.sh@945 -- # kill 1161577 00:25:23.683 08:17:54 -- common/autotest_common.sh@950 -- # wait 1161577 00:25:23.683 Received shutdown signal, test time was about 0.755658 seconds 00:25:23.683 00:25:23.683 Latency(us) 00:25:23.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.683 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.683 Verification LBA range: start 0x0 length 0x400 00:25:23.683 Nvme1n1 : 0.71 453.28 28.33 0.00 0.00 138172.58 15510.19 135441.07 00:25:23.683 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.683 Verification LBA range: start 0x0 length 0x400 00:25:23.683 Nvme2n1 : 0.71 443.62 27.73 0.00 0.00 138994.23 19879.25 142431.57 00:25:23.683 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.683 Verification LBA range: start 0x0 length 0x400 00:25:23.683 Nvme3n1 : 0.74 424.64 26.54 0.00 0.00 135981.65 20097.71 111848.11 00:25:23.683 Job: Nvme4n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:25:23.683 Verification LBA range: start 0x0 length 0x400 00:25:23.683 Nvme4n1 : 0.70 448.55 28.03 0.00 0.00 134304.76 18350.08 110537.39 00:25:23.683 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.683 Verification LBA range: start 0x0 length 0x400 00:25:23.683 Nvme5n1 : 0.70 447.26 27.95 0.00 0.00 133264.51 18022.40 106168.32 00:25:23.683 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.683 Verification LBA range: start 0x0 length 0x400 00:25:23.683 Nvme6n1 : 0.71 446.40 27.90 0.00 0.00 131869.53 18350.08 105294.51 00:25:23.683 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.683 Verification LBA range: start 0x0 length 0x400 00:25:23.684 Nvme7n1 : 0.71 442.41 27.65 0.00 0.00 131688.87 16820.91 110974.29 00:25:23.684 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.684 Verification LBA range: start 0x0 length 0x400 00:25:23.684 Nvme8n1 : 0.75 417.34 26.08 0.00 0.00 130798.07 16493.23 111848.11 00:25:23.684 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.684 Verification LBA range: start 0x0 length 0x400 00:25:23.684 Nvme9n1 : 0.68 397.41 24.84 0.00 0.00 142689.58 14090.24 114469.55 00:25:23.684 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:23.684 Verification LBA range: start 0x0 length 0x400 00:25:23.684 Nvme10n1 : 0.69 398.07 24.88 0.00 0.00 140467.58 9229.65 114469.55 00:25:23.684 =================================================================================================================== 00:25:23.684 Total : 4318.97 269.94 0.00 0.00 135671.19 9229.65 142431.57 00:25:23.944 08:17:54 -- target/shutdown.sh@112 -- # sleep 1 00:25:24.886 08:17:55 -- target/shutdown.sh@113 -- # kill -0 1161219 00:25:24.886 08:17:55 -- target/shutdown.sh@115 -- # stoptarget 00:25:24.886 08:17:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:24.886 08:17:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:24.886 08:17:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:24.886 08:17:55 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:24.886 08:17:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:24.886 08:17:55 -- nvmf/common.sh@116 -- # sync 00:25:24.886 08:17:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:24.886 08:17:55 -- nvmf/common.sh@119 -- # set +e 00:25:24.886 08:17:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:24.886 08:17:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:24.886 rmmod nvme_tcp 00:25:24.886 rmmod nvme_fabrics 00:25:24.886 rmmod nvme_keyring 00:25:24.886 08:17:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:24.886 08:17:55 -- nvmf/common.sh@123 -- # set -e 00:25:24.886 08:17:55 -- nvmf/common.sh@124 -- # return 0 00:25:24.886 08:17:55 -- nvmf/common.sh@477 -- # '[' -n 1161219 ']' 00:25:24.886 08:17:55 -- nvmf/common.sh@478 -- # killprocess 1161219 00:25:24.886 08:17:55 -- common/autotest_common.sh@926 -- # '[' -z 1161219 ']' 00:25:24.886 08:17:55 -- common/autotest_common.sh@930 -- # kill -0 1161219 00:25:24.886 08:17:55 -- common/autotest_common.sh@931 -- # uname 00:25:24.886 08:17:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:24.886 08:17:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 1161219 00:25:25.147 08:17:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:25.147 08:17:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:25.147 08:17:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1161219' 00:25:25.147 killing process with pid 1161219 00:25:25.147 08:17:55 -- common/autotest_common.sh@945 -- # kill 1161219 00:25:25.147 08:17:55 -- common/autotest_common.sh@950 -- # wait 1161219 00:25:25.147 08:17:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:25.147 08:17:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:25.147 08:17:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:25.147 08:17:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.147 08:17:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:25.147 08:17:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.147 08:17:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.147 08:17:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.694 08:17:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:27.694 00:25:27.694 real 0m7.641s 00:25:27.694 user 0m22.698s 00:25:27.694 sys 0m1.231s 00:25:27.694 08:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.694 08:17:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.694 ************************************ 00:25:27.694 END TEST nvmf_shutdown_tc2 00:25:27.694 ************************************ 00:25:27.694 08:17:57 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:27.694 08:17:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:27.694 08:17:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.694 08:17:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.694 ************************************ 00:25:27.694 START TEST nvmf_shutdown_tc3 00:25:27.694 ************************************ 00:25:27.694 08:17:57 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:27.694 08:17:57 -- target/shutdown.sh@120 -- # starttarget 00:25:27.694 08:17:57 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:27.694 08:17:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:27.694 08:17:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.694 08:17:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:27.694 08:17:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:27.694 08:17:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:27.694 08:17:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.694 08:17:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.694 08:17:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.694 08:17:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:27.694 08:17:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:27.694 08:17:57 -- common/autotest_common.sh@10 -- # set +x 00:25:27.694 08:17:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:27.694 08:17:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:27.694 08:17:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:27.694 08:17:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:27.694 08:17:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:27.694 08:17:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:27.694 08:17:57 -- nvmf/common.sh@292 
-- # local -A pci_drivers 00:25:27.694 08:17:57 -- nvmf/common.sh@294 -- # net_devs=() 00:25:27.694 08:17:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:27.694 08:17:57 -- nvmf/common.sh@295 -- # e810=() 00:25:27.694 08:17:57 -- nvmf/common.sh@295 -- # local -ga e810 00:25:27.694 08:17:57 -- nvmf/common.sh@296 -- # x722=() 00:25:27.694 08:17:57 -- nvmf/common.sh@296 -- # local -ga x722 00:25:27.694 08:17:57 -- nvmf/common.sh@297 -- # mlx=() 00:25:27.694 08:17:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:27.694 08:17:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.694 08:17:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:27.694 08:17:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:27.694 08:17:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:27.694 08:17:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:27.694 08:17:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:27.694 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:27.694 08:17:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:27.694 08:17:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:27.694 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:27.694 08:17:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:27.694 08:17:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:27.694 08:17:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:27.694 08:17:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.694 08:17:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:25:27.694 08:17:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.694 08:17:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:27.694 Found net devices under 0000:31:00.0: cvl_0_0 00:25:27.695 08:17:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.695 08:17:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:27.695 08:17:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.695 08:17:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:27.695 08:17:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.695 08:17:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:27.695 Found net devices under 0000:31:00.1: cvl_0_1 00:25:27.695 08:17:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.695 08:17:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:27.695 08:17:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:27.695 08:17:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:27.695 08:17:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:27.695 08:17:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:27.695 08:17:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.695 08:17:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.695 08:17:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.695 08:17:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:27.695 08:17:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.695 08:17:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.695 08:17:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:27.695 08:17:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.695 08:17:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.695 08:17:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:27.695 08:17:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:27.695 08:17:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.695 08:17:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.695 08:17:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.695 08:17:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.695 08:17:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:27.695 08:17:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.695 08:17:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.695 08:17:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.695 08:17:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:27.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:25:27.695 00:25:27.695 --- 10.0.0.2 ping statistics --- 00:25:27.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.695 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:25:27.695 08:17:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:25:27.695 00:25:27.695 --- 10.0.0.1 ping statistics --- 00:25:27.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.695 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:27.695 08:17:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.695 08:17:58 -- nvmf/common.sh@410 -- # return 0 00:25:27.695 08:17:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:27.695 08:17:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.695 08:17:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:27.695 08:17:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:27.695 08:17:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.695 08:17:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:27.695 08:17:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:27.695 08:17:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:27.695 08:17:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:27.695 08:17:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:27.695 08:17:58 -- common/autotest_common.sh@10 -- # set +x 00:25:27.695 08:17:58 -- nvmf/common.sh@469 -- # nvmfpid=1162777 00:25:27.695 08:17:58 -- nvmf/common.sh@470 -- # waitforlisten 1162777 00:25:27.695 08:17:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:27.695 08:17:58 -- common/autotest_common.sh@819 -- # '[' -z 1162777 ']' 00:25:27.695 08:17:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.695 08:17:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:27.695 08:17:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.695 08:17:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:27.695 08:17:58 -- common/autotest_common.sh@10 -- # set +x 00:25:27.695 [2024-06-11 08:17:58.310344] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:27.695 [2024-06-11 08:17:58.310407] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.957 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.957 [2024-06-11 08:17:58.397794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:27.957 [2024-06-11 08:17:58.458210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:27.957 [2024-06-11 08:17:58.458307] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.957 [2024-06-11 08:17:58.458313] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.957 [2024-06-11 08:17:58.458318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
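One detail visible in this second target launch: the nvmf_tgt command line above carries the ip netns exec cvl_0_0_ns_spdk prefix three times, where the tc2 launch earlier had it twice. Each pass through nvmf_tcp_init re-runs

  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # nvmf/common.sh@269 in this trace

so the namespace prefix appears to accumulate by one per test case executed in the same shell. It is effectively harmless, since re-entering the same namespace with a nested ip netns exec changes nothing, but it explains the growing command lines from test to test.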
00:25:27.957 [2024-06-11 08:17:58.458447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.957 [2024-06-11 08:17:58.458585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.957 [2024-06-11 08:17:58.458717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.957 [2024-06-11 08:17:58.458719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:28.528 08:17:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:28.528 08:17:59 -- common/autotest_common.sh@852 -- # return 0 00:25:28.528 08:17:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:28.528 08:17:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:28.528 08:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:28.528 08:17:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.528 08:17:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:28.528 08:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.528 08:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:28.528 [2024-06-11 08:17:59.128422] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.528 08:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.528 08:17:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:28.528 08:17:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:28.528 08:17:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:28.528 08:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:28.528 08:17:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.528 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.528 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.528 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.528 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.528 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.528 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.528 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.528 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.790 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.790 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.790 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.790 08:17:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:28.790 08:17:59 -- target/shutdown.sh@28 -- # cat 00:25:28.790 08:17:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:28.790 08:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.790 08:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:28.790 Malloc1 00:25:28.790 [2024-06-11 08:17:59.223299] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.790 Malloc2 
00:25:28.790 Malloc3 00:25:28.790 Malloc4 00:25:28.790 Malloc5 00:25:28.790 Malloc6 00:25:28.790 Malloc7 00:25:29.051 Malloc8 00:25:29.051 Malloc9 00:25:29.051 Malloc10 00:25:29.051 08:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.051 08:17:59 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:29.051 08:17:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:29.051 08:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:29.051 08:17:59 -- target/shutdown.sh@124 -- # perfpid=1163139 00:25:29.051 08:17:59 -- target/shutdown.sh@125 -- # waitforlisten 1163139 /var/tmp/bdevperf.sock 00:25:29.051 08:17:59 -- common/autotest_common.sh@819 -- # '[' -z 1163139 ']' 00:25:29.051 08:17:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:29.051 08:17:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.051 08:17:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:29.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:29.051 08:17:59 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:29.051 08:17:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.051 08:17:59 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:29.051 08:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:29.051 08:17:59 -- nvmf/common.sh@520 -- # config=() 00:25:29.051 08:17:59 -- nvmf/common.sh@520 -- # local subsystem config 00:25:29.051 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.051 { 00:25:29.051 "params": { 00:25:29.051 "name": "Nvme$subsystem", 00:25:29.051 "trtype": "$TEST_TRANSPORT", 00:25:29.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.051 "adrfam": "ipv4", 00:25:29.051 "trsvcid": "$NVMF_PORT", 00:25:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.051 "hdgst": ${hdgst:-false}, 00:25:29.051 "ddgst": ${ddgst:-false} 00:25:29.051 }, 00:25:29.051 "method": "bdev_nvme_attach_controller" 00:25:29.051 } 00:25:29.051 EOF 00:25:29.051 )") 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.051 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.051 { 00:25:29.051 "params": { 00:25:29.051 "name": "Nvme$subsystem", 00:25:29.051 "trtype": "$TEST_TRANSPORT", 00:25:29.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.051 "adrfam": "ipv4", 00:25:29.051 "trsvcid": "$NVMF_PORT", 00:25:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.051 "hdgst": ${hdgst:-false}, 00:25:29.051 "ddgst": ${ddgst:-false} 00:25:29.051 }, 00:25:29.051 "method": "bdev_nvme_attach_controller" 00:25:29.051 } 00:25:29.051 EOF 00:25:29.051 )") 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.051 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.051 { 00:25:29.051 "params": { 00:25:29.051 "name": "Nvme$subsystem", 00:25:29.051 "trtype": "$TEST_TRANSPORT", 00:25:29.051 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:29.051 "adrfam": "ipv4", 00:25:29.051 "trsvcid": "$NVMF_PORT", 00:25:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.051 "hdgst": ${hdgst:-false}, 00:25:29.051 "ddgst": ${ddgst:-false} 00:25:29.051 }, 00:25:29.051 "method": "bdev_nvme_attach_controller" 00:25:29.051 } 00:25:29.051 EOF 00:25:29.051 )") 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.051 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.051 { 00:25:29.051 "params": { 00:25:29.051 "name": "Nvme$subsystem", 00:25:29.051 "trtype": "$TEST_TRANSPORT", 00:25:29.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.051 "adrfam": "ipv4", 00:25:29.051 "trsvcid": "$NVMF_PORT", 00:25:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.051 "hdgst": ${hdgst:-false}, 00:25:29.051 "ddgst": ${ddgst:-false} 00:25:29.051 }, 00:25:29.051 "method": "bdev_nvme_attach_controller" 00:25:29.051 } 00:25:29.051 EOF 00:25:29.051 )") 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.051 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.051 { 00:25:29.051 "params": { 00:25:29.051 "name": "Nvme$subsystem", 00:25:29.051 "trtype": "$TEST_TRANSPORT", 00:25:29.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.051 "adrfam": "ipv4", 00:25:29.051 "trsvcid": "$NVMF_PORT", 00:25:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.051 "hdgst": ${hdgst:-false}, 00:25:29.051 "ddgst": ${ddgst:-false} 00:25:29.051 }, 00:25:29.051 "method": "bdev_nvme_attach_controller" 00:25:29.051 } 00:25:29.051 EOF 00:25:29.051 )") 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.051 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.051 { 00:25:29.051 "params": { 00:25:29.051 "name": "Nvme$subsystem", 00:25:29.051 "trtype": "$TEST_TRANSPORT", 00:25:29.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.051 "adrfam": "ipv4", 00:25:29.051 "trsvcid": "$NVMF_PORT", 00:25:29.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.051 "hdgst": ${hdgst:-false}, 00:25:29.051 "ddgst": ${ddgst:-false} 00:25:29.051 }, 00:25:29.051 "method": "bdev_nvme_attach_controller" 00:25:29.051 } 00:25:29.051 EOF 00:25:29.051 )") 00:25:29.051 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.051 [2024-06-11 08:17:59.663492] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:29.052 [2024-06-11 08:17:59.663545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163139 ] 00:25:29.052 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.052 { 00:25:29.052 "params": { 00:25:29.052 "name": "Nvme$subsystem", 00:25:29.052 "trtype": "$TEST_TRANSPORT", 00:25:29.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.052 "adrfam": "ipv4", 00:25:29.052 "trsvcid": "$NVMF_PORT", 00:25:29.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.052 "hdgst": ${hdgst:-false}, 00:25:29.052 "ddgst": ${ddgst:-false} 00:25:29.052 }, 00:25:29.052 "method": "bdev_nvme_attach_controller" 00:25:29.052 } 00:25:29.052 EOF 00:25:29.052 )") 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.052 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.052 { 00:25:29.052 "params": { 00:25:29.052 "name": "Nvme$subsystem", 00:25:29.052 "trtype": "$TEST_TRANSPORT", 00:25:29.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.052 "adrfam": "ipv4", 00:25:29.052 "trsvcid": "$NVMF_PORT", 00:25:29.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.052 "hdgst": ${hdgst:-false}, 00:25:29.052 "ddgst": ${ddgst:-false} 00:25:29.052 }, 00:25:29.052 "method": "bdev_nvme_attach_controller" 00:25:29.052 } 00:25:29.052 EOF 00:25:29.052 )") 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.052 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.052 { 00:25:29.052 "params": { 00:25:29.052 "name": "Nvme$subsystem", 00:25:29.052 "trtype": "$TEST_TRANSPORT", 00:25:29.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.052 "adrfam": "ipv4", 00:25:29.052 "trsvcid": "$NVMF_PORT", 00:25:29.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.052 "hdgst": ${hdgst:-false}, 00:25:29.052 "ddgst": ${ddgst:-false} 00:25:29.052 }, 00:25:29.052 "method": "bdev_nvme_attach_controller" 00:25:29.052 } 00:25:29.052 EOF 00:25:29.052 )") 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.052 08:17:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:29.052 { 00:25:29.052 "params": { 00:25:29.052 "name": "Nvme$subsystem", 00:25:29.052 "trtype": "$TEST_TRANSPORT", 00:25:29.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.052 "adrfam": "ipv4", 00:25:29.052 "trsvcid": "$NVMF_PORT", 00:25:29.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.052 "hdgst": ${hdgst:-false}, 00:25:29.052 "ddgst": ${ddgst:-false} 00:25:29.052 }, 00:25:29.052 "method": "bdev_nvme_attach_controller" 00:25:29.052 } 00:25:29.052 EOF 00:25:29.052 )") 00:25:29.052 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.052 08:17:59 -- nvmf/common.sh@542 -- # cat 00:25:29.052 08:17:59 -- nvmf/common.sh@544 -- # jq . 
00:25:29.313 08:17:59 -- nvmf/common.sh@545 -- # IFS=, 00:25:29.313 08:17:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme1", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme2", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme3", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme4", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme5", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme6", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme7", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme8", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": 
"bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme9", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 },{ 00:25:29.313 "params": { 00:25:29.313 "name": "Nvme10", 00:25:29.313 "trtype": "tcp", 00:25:29.313 "traddr": "10.0.0.2", 00:25:29.313 "adrfam": "ipv4", 00:25:29.313 "trsvcid": "4420", 00:25:29.313 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:29.313 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:29.313 "hdgst": false, 00:25:29.313 "ddgst": false 00:25:29.313 }, 00:25:29.313 "method": "bdev_nvme_attach_controller" 00:25:29.313 }' 00:25:29.313 [2024-06-11 08:17:59.724528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.313 [2024-06-11 08:17:59.787260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.699 Running I/O for 10 seconds... 00:25:30.699 08:18:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.699 08:18:01 -- common/autotest_common.sh@852 -- # return 0 00:25:30.699 08:18:01 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:30.699 08:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.699 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:30.699 08:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.699 08:18:01 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.699 08:18:01 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:30.699 08:18:01 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:30.699 08:18:01 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:30.699 08:18:01 -- target/shutdown.sh@57 -- # local ret=1 00:25:30.699 08:18:01 -- target/shutdown.sh@58 -- # local i 00:25:30.699 08:18:01 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:30.699 08:18:01 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:30.699 08:18:01 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:30.699 08:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.699 08:18:01 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:30.699 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:30.699 08:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.699 08:18:01 -- target/shutdown.sh@60 -- # read_io_count=42 00:25:30.699 08:18:01 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:25:30.699 08:18:01 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:30.967 08:18:01 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:30.967 08:18:01 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:30.967 08:18:01 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:30.967 08:18:01 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:30.967 08:18:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.967 08:18:01 -- common/autotest_common.sh@10 -- # set +x 00:25:30.967 08:18:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.967 08:18:01 -- target/shutdown.sh@60 -- # read_io_count=167 00:25:30.967 08:18:01 -- 
target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:25:30.967 08:18:01 -- target/shutdown.sh@64 -- # ret=0 00:25:30.967 08:18:01 -- target/shutdown.sh@65 -- # break 00:25:30.967 08:18:01 -- target/shutdown.sh@69 -- # return 0 00:25:30.967 08:18:01 -- target/shutdown.sh@134 -- # killprocess 1162777 00:25:30.967 08:18:01 -- common/autotest_common.sh@926 -- # '[' -z 1162777 ']' 00:25:30.967 08:18:01 -- common/autotest_common.sh@930 -- # kill -0 1162777 00:25:30.967 08:18:01 -- common/autotest_common.sh@931 -- # uname 00:25:30.967 08:18:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:30.967 08:18:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1162777 00:25:30.967 08:18:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:30.967 08:18:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:30.967 08:18:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1162777' 00:25:30.967 killing process with pid 1162777 00:25:30.967 08:18:01 -- common/autotest_common.sh@945 -- # kill 1162777 00:25:30.967 08:18:01 -- common/autotest_common.sh@950 -- # wait 1162777 00:25:30.967 [2024-06-11 08:18:01.578493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.967 [2024-06-11 08:18:01.578589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.968 [2024-06-11 08:18:01.578593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.968 [2024-06-11 08:18:01.578598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.968 [2024-06-11 08:18:01.578602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218b470 is same with the state(5) to be set 00:25:30.968 [2024-06-11 08:18:01.578607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x218b470 is same with the state(5) to be set
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set (message repeated continuously from 08:18:01.578 to 08:18:01.584 for tqpair=0x218b470, 0x218dde0, 0x218b920 and 0x218bdd0; duplicate log lines elided)
[2024-06-11 08:18:01.584855] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-06-11 08:18:01.584922] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-06-11 08:18:01.585141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c260 is same with the state(5) to be set
[2024-06-11 08:18:01.585187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-06-11 08:18:01.585204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs follow in the same form for the remaining outstanding commands, each completed ABORTED - SQ DELETION (00/08): READ cid:5 lba:29312, WRITE cid:2 lba:29440, READ cid:0 lba:29568, WRITE cid:7 lba:29696, WRITE cid:26 lba:29824, WRITE cid:19 lba:29952, READ cid:15 lba:30080, READ cid:3 lba:30208, READ cid:9 lba:24576, READ cid:6 lba:30336, READ cid:13 lba:24960, READ cid:10 lba:30464, WRITE cid:8 lba:30592, READ cid:14 lba:25088, READ cid:17 lba:25600, READ cid:34 lba:25984, WRITE cid:11 lba:30720, READ cid:28 lba:30848, READ cid:30 lba:26496, READ cid:16 lba:30976, READ cid:18 lba:31104, all sqid:1 nsid:1 len:128; the duplicate recv-state errors for tqpair=0x218c260 interleaved with them through 08:18:01.585538 are elided)
[2024-06-11 08:18:01.585630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26624 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.971 [2024-06-11 08:18:01.585637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.971 [2024-06-11 08:18:01.585647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.971 [2024-06-11 08:18:01.585653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.971 [2024-06-11 08:18:01.585663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.971 [2024-06-11 08:18:01.585670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.971 [2024-06-11 08:18:01.585679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.971 [2024-06-11 08:18:01.585686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:30.972 [2024-06-11 08:18:01.585966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.585991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.585998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 
[2024-06-11 08:18:01.586129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.972 [2024-06-11 08:18:01.586217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.972 [2024-06-11 08:18:01.586215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.973 [2024-06-11 08:18:01.586240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.973 [2024-06-11 08:18:01.586258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.973 [2024-06-11 08:18:01.586280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.973 [2024-06-11 08:18:01.586295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.973 [2024-06-11 08:18:01.586322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586409] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586427] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the 
state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c6f0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586643] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fe8c60 was disconnected and freed. reset controller. 00:25:30.973 [2024-06-11 08:18:01.586729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.973 [2024-06-11 08:18:01.586741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.973 [2024-06-11 08:18:01.586757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.973 [2024-06-11 08:18:01.586775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.973 [2024-06-11 08:18:01.586790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.973 [2024-06-11 08:18:01.586797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e88fc0 is same with the state(5) to be set 00:25:30.973 [2024-06-11 08:18:01.586820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204caa0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.586901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.586965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9170 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.586991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.586999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e86260 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:30.974 [2024-06-11 08:18:01.587097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20441d0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:30.974 [2024-06-11 08:18:01.587203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.974 [2024-06-11 08:18:01.587210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9a50 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with 
the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.974 [2024-06-11 08:18:01.587451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 
08:18:01.587505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.587568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218cba0 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same 
with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588609] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.975 [2024-06-11 08:18:01.588782] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:30.975 [2024-06-11 08:18:01.588810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20441d0 (9): Bad file descriptor 00:25:30.975 [2024-06-11 08:18:01.589129] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.975 [2024-06-11 08:18:01.590331] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:30.975 [2024-06-11 08:18:01.590372] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU 
type 0x00 00:25:30.975 [2024-06-11 08:18:01.590575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.975 [2024-06-11 08:18:01.590589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.975 [2024-06-11 08:18:01.590602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.975 [2024-06-11 08:18:01.590610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.975 [2024-06-11 08:18:01.590620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.975 [2024-06-11 08:18:01.590627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.975 [2024-06-11 08:18:01.590636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 
[2024-06-11 08:18:01.590752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 
08:18:01.590924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.590986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.590995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591104] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.976 [2024-06-11 08:18:01.591187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.976 [2024-06-11 08:18:01.591194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.591343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.977 [2024-06-11 08:18:01.591352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.977 [2024-06-11 08:18:01.603675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.603755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 
[2024-06-11 08:18:01.603762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d030 is same with the state(5) to be set 00:25:30.977 [2024-06-11 08:18:01.605642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d950 is same
with the state(5) to be set 00:25:30.978 [2024-06-11 08:18:01.605646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d950 is same with the state(5) to be set 00:25:30.978 [2024-06-11 08:18:01.605651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218d950 is same with the state(5) to be set 00:25:30.978 [2024-06-11 08:18:01.605976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.978 [2024-06-11 08:18:01.606012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.978 [2024-06-11 08:18:01.606021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.978 [2024-06-11 08:18:01.606030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.978 [2024-06-11 08:18:01.606038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.978 [2024-06-11 08:18:01.606047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.978 [2024-06-11 08:18:01.606055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.978 [2024-06-11 08:18:01.606064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 
[2024-06-11 08:18:01.606154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:30.979 [2024-06-11 08:18:01.606309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:30.979 [2024-06-11 08:18:01.606318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f25d20 is same with the state(5) to be set 00:25:30.979 [2024-06-11 08:18:01.606375] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f25d20 was disconnected and freed. reset controller. 00:25:31.250 [2024-06-11 08:18:01.606446] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:31.250 [2024-06-11 08:18:01.606641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80790 is same with the state(5) to be set 00:25:31.250 [2024-06-11 08:18:01.606750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f640 is same with the state(5) to be set 00:25:31.250 [2024-06-11 08:18:01.606838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 
08:18:01.606847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.606891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.606898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204bf20 is same with the state(5) to be set 00:25:31.250 [2024-06-11 08:18:01.606915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e88fc0 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.606931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204caa0 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.606945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea9170 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.606958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e86260 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.606973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea9a50 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.606996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.607006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.607014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.607020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.607028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.607038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.607046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.250 [2024-06-11 08:18:01.607053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:31.250 [2024-06-11 08:18:01.607060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f210 is same with the state(5) to be set 00:25:31.250 [2024-06-11 08:18:01.608522] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:31.250 [2024-06-11 08:18:01.608565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:31.250 [2024-06-11 08:18:01.608575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.608582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20441d0 is same with the state(5) to be set 00:25:31.250 [2024-06-11 08:18:01.608847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-06-11 08:18:01.609145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.250 [2024-06-11 08:18:01.609155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea9170 with addr=10.0.0.2, port=4420 00:25:31.250 [2024-06-11 08:18:01.609163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9170 is same with the state(5) to be set 00:25:31.250 [2024-06-11 08:18:01.609174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20441d0 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.609586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea9170 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.609600] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:31.250 [2024-06-11 08:18:01.609606] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:31.250 [2024-06-11 08:18:01.609615] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:31.250 [2024-06-11 08:18:01.609686] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:31.250 [2024-06-11 08:18:01.609720] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:31.250 [2024-06-11 08:18:01.609754] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:31.250 [2024-06-11 08:18:01.609765] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.250 [2024-06-11 08:18:01.609772] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:31.250 [2024-06-11 08:18:01.609778] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:31.250 [2024-06-11 08:18:01.609786] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:31.250 [2024-06-11 08:18:01.609851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.250 [2024-06-11 08:18:01.616613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80790 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.616638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f640 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.616656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204bf20 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.616694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f210 (9): Bad file descriptor 00:25:31.250 [2024-06-11 08:18:01.616810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.250 [2024-06-11 08:18:01.616944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.250 [2024-06-11 08:18:01.616953] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.616961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.616970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.616978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.616987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.616995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:31 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33152 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.251 [2024-06-11 08:18:01.617619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.251 [2024-06-11 08:18:01.617626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:31.252 [2024-06-11 08:18:01.617819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.617905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.617913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21df0 is same with the state(5) to be set 00:25:31.252 [2024-06-11 08:18:01.619235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 
08:18:01.619314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.252 [2024-06-11 08:18:01.619633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.252 [2024-06-11 08:18:01.619642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.619987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.619996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.253 [2024-06-11 08:18:01.620290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.253 [2024-06-11 08:18:01.620299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.620307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.620316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.620323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.620332] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23220 is same with the state(5) to be set 00:25:31.254 [2024-06-11 08:18:01.621571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.621992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.621999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.622009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.622017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.622026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.622034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.622043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.622051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.622060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.622068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.622077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.622086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.622095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.254 [2024-06-11 08:18:01.622102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.254 [2024-06-11 08:18:01.622112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.622686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.622694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f247a0 is same with the state(5) to be set 00:25:31.255 [2024-06-11 08:18:01.623945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.623960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.623972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.623982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.623993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.624002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.624013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.624022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.624033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.624042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.255 [2024-06-11 08:18:01.624053] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.255 [2024-06-11 08:18:01.624062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:45 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31360 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.256 [2024-06-11 08:18:01.624741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.256 [2024-06-11 08:18:01.624750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
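Every completion in the run above reports the status pair (00/08): status code type 0x0 (the NVMe generic command status set) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion" - the queued READ/WRITE commands on qid:1 are failed back to the initiator while the test deletes the I/O submission queues during the controller resets. A minimal sketch of decoding that "(SCT/SC)" pair, using locally defined constants rather than SPDK's own headers (the helper name below is made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Values from the NVMe base specification; the names are local, not SPDK's. */
#define NVME_SCT_GENERIC            0x0   /* generic command status set           */
#define NVME_SC_ABORTED_SQ_DELETION 0x08  /* Command Aborted due to SQ Deletion   */

/* Hypothetical helper: does a completion carry the abort status that every
 * entry in the run above reports? */
static int aborted_by_sq_deletion(uint8_t sct, uint8_t sc)
{
        return sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION;
}

int main(void)
{
        uint8_t sct = 0x00, sc = 0x08;  /* the "(00/08)" pair from the log */

        if (aborted_by_sq_deletion(sct, sc))
                printf("ABORTED - SQ DELETION (%02x/%02x)\n", sct, sc);
        return 0;
}
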
00:25:31.257 [2024-06-11 08:18:01.624929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.624992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.624999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.625008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.625016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.625025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.625032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.625042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.625049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.625059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.257 [2024-06-11 08:18:01.625066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.257 [2024-06-11 08:18:01.625074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f26b60 is same with the state(5) to be set 00:25:31.257 [2024-06-11 08:18:01.626402] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:31.257 [2024-06-11 08:18:01.626417] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.257 [2024-06-11 08:18:01.626427] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:31.257 [2024-06-11 08:18:01.626436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:31.257 [2024-06-11 08:18:01.626547] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:31.257 [2024-06-11 08:18:01.626954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.627130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.627140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20441d0 with addr=10.0.0.2, port=4420
00:25:31.257 [2024-06-11 08:18:01.627149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20441d0 is same with the state(5) to be set
00:25:31.257 [2024-06-11 08:18:01.627650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.628039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.628052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e86260 with addr=10.0.0.2, port=4420
00:25:31.257 [2024-06-11 08:18:01.628062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e86260 is same with the state(5) to be set
00:25:31.257 [2024-06-11 08:18:01.628399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.628779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.628789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204caa0 with addr=10.0.0.2, port=4420
00:25:31.257 [2024-06-11 08:18:01.628796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204caa0 is same with the state(5) to be set
00:25:31.257 [2024-06-11 08:18:01.629038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.629253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.629262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e88fc0 with addr=10.0.0.2, port=4420
00:25:31.257 [2024-06-11 08:18:01.629269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e88fc0 is same with the state(5) to be set
00:25:31.257 [2024-06-11 08:18:01.630376] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:31.257 [2024-06-11 08:18:01.630573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.630957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.630966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea9a50 with addr=10.0.0.2, port=4420
00:25:31.257 [2024-06-11 08:18:01.630974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9a50 is same with the state(5) to be set
00:25:31.257 [2024-06-11 08:18:01.630984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20441d0 (9): Bad file descriptor
00:25:31.257 [2024-06-11 08:18:01.630995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e86260 (9): Bad file descriptor
00:25:31.257 [2024-06-11 08:18:01.631004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204caa0 (9): Bad file descriptor
00:25:31.257 [2024-06-11 08:18:01.631012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e88fc0 (9): Bad file descriptor
00:25:31.257 [2024-06-11 08:18:01.631431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.631738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.257 [2024-06-11 08:18:01.631747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea9170 with addr=10.0.0.2, port=4420
00:25:31.257 [2024-06-11 08:18:01.631755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9170 is same with the state(5) to be set
00:25:31.257 [2024-06-11 08:18:01.631763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea9a50 (9): Bad file descriptor
00:25:31.257 [2024-06-11 08:18:01.631772] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:31.257 [2024-06-11 08:18:01.631778] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:31.257 [2024-06-11 08:18:01.631786] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:31.257 [2024-06-11 08:18:01.631798] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:31.257 [2024-06-11 08:18:01.631805] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:31.257 [2024-06-11 08:18:01.631811] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:31.257 [2024-06-11 08:18:01.631822] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:31.257 [2024-06-11 08:18:01.631828] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:31.257 [2024-06-11 08:18:01.631838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:31.257 [2024-06-11 08:18:01.631848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:31.257 [2024-06-11 08:18:01.631854] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:31.257 [2024-06-11 08:18:01.631861] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
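The reconnect attempts above fail inside posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: the host at 10.0.0.2 still answers, but nothing is listening on port 4420 any longer, so every TCP connect is refused, the follow-up flush attempts report errno 9 (EBADF, "Bad file descriptor") on sockets that are apparently already gone, and spdk_nvme_ctrlr_reconnect_poll_async eventually leaves cnode10, cnode1, cnode2 and cnode3 in a failed state. A minimal standalone sketch, independent of SPDK and assuming a reachable host with no listener on the port, that reproduces the same errno:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

        if (fd < 0)
                return 1;
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* listener address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                /* With the host up but the port closed this prints:
                 *   connect() failed, errno = 111 (Connection refused) */
                printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
}
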
00:25:31.257 [2024-06-11 08:18:01.631930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.631942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.631958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.631965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.631975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.631982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.631991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.631999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 
08:18:01.632108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.258 [2024-06-11 08:18:01.632518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.258 [2024-06-11 08:18:01.632527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.632985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.632994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.633001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.633010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe38d0 is same with the state(5) to be set 00:25:31.259 [2024-06-11 08:18:01.634253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.634266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.634278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.634287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.634298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.634307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.634318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.634327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.634338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.634347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.259 [2024-06-11 08:18:01.634357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.259 [2024-06-11 08:18:01.634364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:31.259 [2024-06-11 08:18:01.634374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:31.259 [2024-06-11 08:18:01.634381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for every outstanding READ/WRITE command on qid:1 (cids 0-63, LBAs 24320-34688, len:128), each command completed as ABORTED - SQ DELETION (00/08) ...]
00:25:31.261 [2024-06-11 08:18:01.635340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4eb0 is same with the state(5) to be set
[... an equivalent run of aborted READ/WRITE command/completion NOTICE pairs follows for a second queue pair ...]
00:25:31.262 [2024-06-11 08:18:01.637658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe6490 is same with the state(5) to be set
[... and again for a third queue pair ...]
00:25:31.264 [2024-06-11 08:18:01.639964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe7650 is same with the state(5) to be set
00:25:31.264 [2024-06-11 08:18:01.641178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.264 [2024-06-11 08:18:01.641190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.264 [2024-06-11 08:18:01.641197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.264 [2024-06-11 08:18:01.641204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.264 [2024-06-11 08:18:01.641213] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:31.264 [2024-06-11 08:18:01.641226] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:31.264 [2024-06-11 08:18:01.641253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea9170 (9): Bad file descriptor 00:25:31.264 [2024-06-11 08:18:01.641262] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:31.264 [2024-06-11 08:18:01.641269] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:31.264 [2024-06-11 08:18:01.641279] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:31.264 [2024-06-11 08:18:01.641326] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.264 [2024-06-11 08:18:01.641339] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.264 [2024-06-11 08:18:01.641353] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.264 [2024-06-11 08:18:01.641363] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.264 [2024-06-11 08:18:01.641415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:31.264 task offset: 29184 on job bdev=Nvme10n1 fails 00:25:31.264 00:25:31.264 Latency(us) 00:25:31.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.264 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.264 Job: Nvme1n1 ended in about 0.61 seconds with error 00:25:31.264 Verification LBA range: start 0x0 length 0x400 00:25:31.264 Nvme1n1 : 0.61 339.42 21.21 104.44 0.00 142985.74 83449.17 124955.31 00:25:31.264 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.264 Job: Nvme2n1 ended in about 0.62 seconds with error 00:25:31.264 Verification LBA range: start 0x0 length 0x400 00:25:31.264 Nvme2n1 : 0.62 338.08 21.13 104.03 0.00 141671.81 67720.53 133693.44 00:25:31.264 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.264 Job: Nvme3n1 ended in about 0.62 seconds with error 00:25:31.264 Verification LBA range: start 0x0 length 0x400 00:25:31.264 Nvme3n1 : 0.62 336.80 21.05 103.63 0.00 140317.72 80390.83 122333.87 00:25:31.264 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.264 Job: Nvme4n1 ended in about 0.60 seconds with error 00:25:31.265 Verification LBA range: start 0x0 length 0x400 00:25:31.265 Nvme4n1 : 0.60 345.49 21.59 106.31 0.00 134819.44 78643.20 107915.95 00:25:31.265 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.265 Job: Nvme5n1 ended in about 0.62 seconds with error 00:25:31.265 Verification LBA range: start 0x0 length 0x400 00:25:31.265 Nvme5n1 : 0.62 335.51 20.97 103.23 0.00 137170.22 79080.11 116217.17 00:25:31.265 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.265 Job: Nvme6n1 ended in about 0.63 seconds with error 00:25:31.265 Verification LBA range: start 0x0 length 0x400 00:25:31.265 Nvme6n1 : 0.63 334.46 20.90 101.93 
0.00 136164.99 35607.89 114469.55 00:25:31.265 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.265 Job: Nvme7n1 ended in about 0.63 seconds with error 00:25:31.265 Verification LBA range: start 0x0 length 0x400 00:25:31.265 Nvme7n1 : 0.63 330.05 20.63 101.55 0.00 135804.08 76895.57 107479.04 00:25:31.265 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.265 Job: Nvme8n1 ended in about 0.63 seconds with error 00:25:31.265 Verification LBA range: start 0x0 length 0x400 00:25:31.265 Nvme8n1 : 0.63 328.85 20.55 101.18 0.00 134464.45 76021.76 107479.04 00:25:31.265 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.265 Job: Nvme9n1 ended in about 0.63 seconds with error 00:25:31.265 Verification LBA range: start 0x0 length 0x400 00:25:31.265 Nvme9n1 : 0.63 327.66 20.48 100.82 0.00 133100.72 66409.81 112721.92 00:25:31.265 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:31.265 Job: Nvme10n1 ended in about 0.58 seconds with error 00:25:31.265 Verification LBA range: start 0x0 length 0x400 00:25:31.265 Nvme10n1 : 0.58 360.53 22.53 109.87 0.00 118111.62 2976.43 113595.73 00:25:31.265 =================================================================================================================== 00:25:31.265 Total : 3376.84 211.05 1036.99 0.00 135448.86 2976.43 133693.44 00:25:31.265 [2024-06-11 08:18:01.668452] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:31.265 [2024-06-11 08:18:01.668501] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:31.265 [2024-06-11 08:18:01.668518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.265 [2024-06-11 08:18:01.668928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.669264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.669275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204bf20 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.669285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204bf20 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.669705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.669897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.669906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5f640 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.669913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f640 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.669921] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:31.265 [2024-06-11 08:18:01.669928] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:31.265 [2024-06-11 08:18:01.669936] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:25:31.265 [2024-06-11 08:18:01.671171] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:31.265 [2024-06-11 08:18:01.671187] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:31.265 [2024-06-11 08:18:01.671197] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.265 [2024-06-11 08:18:01.671206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.265 [2024-06-11 08:18:01.671581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.671933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.671943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5f210 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.671951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f210 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.672135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.672518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.672528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e80790 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.672535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80790 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.672547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204bf20 (9): Bad file descriptor 00:25:31.265 [2024-06-11 08:18:01.672558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f640 (9): Bad file descriptor 00:25:31.265 [2024-06-11 08:18:01.672592] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.265 [2024-06-11 08:18:01.672617] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:31.265 [2024-06-11 08:18:01.672627] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:25:31.265 [2024-06-11 08:18:01.672683] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:31.265 [2024-06-11 08:18:01.673030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.673325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.673334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e88fc0 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.673341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e88fc0 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.673611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.673996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.674006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x204caa0 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.674013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204caa0 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.674307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.674608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.674620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e86260 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.674627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e86260 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.674635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5f210 (9): Bad file descriptor 00:25:31.265 [2024-06-11 08:18:01.674645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80790 (9): Bad file descriptor 00:25:31.265 [2024-06-11 08:18:01.674653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:31.265 [2024-06-11 08:18:01.674660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:31.265 [2024-06-11 08:18:01.674667] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:31.265 [2024-06-11 08:18:01.674678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:31.265 [2024-06-11 08:18:01.674684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:31.265 [2024-06-11 08:18:01.674691] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:31.265 [2024-06-11 08:18:01.674748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:31.265 [2024-06-11 08:18:01.674760] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:31.265 [2024-06-11 08:18:01.674769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.265 [2024-06-11 08:18:01.674776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.265 [2024-06-11 08:18:01.675100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.675385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.675394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20441d0 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.675401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20441d0 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.675410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e88fc0 (9): Bad file descriptor 00:25:31.265 [2024-06-11 08:18:01.675420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204caa0 (9): Bad file descriptor 00:25:31.265 [2024-06-11 08:18:01.675432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e86260 (9): Bad file descriptor 00:25:31.265 [2024-06-11 08:18:01.675445] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:31.265 [2024-06-11 08:18:01.675451] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:31.265 [2024-06-11 08:18:01.675458] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:31.265 [2024-06-11 08:18:01.675466] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:31.265 [2024-06-11 08:18:01.675473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:31.265 [2024-06-11 08:18:01.675480] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:31.265 [2024-06-11 08:18:01.675508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.265 [2024-06-11 08:18:01.675514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.265 [2024-06-11 08:18:01.675683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.675859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.265 [2024-06-11 08:18:01.675869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea9a50 with addr=10.0.0.2, port=4420 00:25:31.265 [2024-06-11 08:18:01.675879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9a50 is same with the state(5) to be set 00:25:31.265 [2024-06-11 08:18:01.676195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-06-11 08:18:01.676535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.266 [2024-06-11 08:18:01.676545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ea9170 with addr=10.0.0.2, port=4420 00:25:31.266 [2024-06-11 08:18:01.676553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea9170 is same with the state(5) to be set 00:25:31.266 [2024-06-11 08:18:01.676563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20441d0 (9): Bad file descriptor 00:25:31.266 [2024-06-11 08:18:01.676571] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:31.266 [2024-06-11 08:18:01.676577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:31.266 [2024-06-11 08:18:01.676583] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:31.266 [2024-06-11 08:18:01.676594] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:31.266 [2024-06-11 08:18:01.676600] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:31.266 [2024-06-11 08:18:01.676607] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:31.266 [2024-06-11 08:18:01.676617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.266 [2024-06-11 08:18:01.676624] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.266 [2024-06-11 08:18:01.676631] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.266 [2024-06-11 08:18:01.676671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.266 [2024-06-11 08:18:01.676680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.266 [2024-06-11 08:18:01.676686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.266 [2024-06-11 08:18:01.676693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea9a50 (9): Bad file descriptor 00:25:31.266 [2024-06-11 08:18:01.676705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea9170 (9): Bad file descriptor 00:25:31.266 [2024-06-11 08:18:01.676714] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:31.266 [2024-06-11 08:18:01.676721] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:31.266 [2024-06-11 08:18:01.676728] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:31.266 [2024-06-11 08:18:01.676756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.266 [2024-06-11 08:18:01.676763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:31.266 [2024-06-11 08:18:01.676769] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:31.266 [2024-06-11 08:18:01.676776] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:31.266 [2024-06-11 08:18:01.676785] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:31.266 [2024-06-11 08:18:01.676792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:31.266 [2024-06-11 08:18:01.676798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:31.266 [2024-06-11 08:18:01.676826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.266 [2024-06-11 08:18:01.676833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.266 08:18:01 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:31.266 08:18:01 -- target/shutdown.sh@138 -- # sleep 1 00:25:32.651 08:18:02 -- target/shutdown.sh@141 -- # kill -9 1163139 00:25:32.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1163139) - No such process 00:25:32.651 08:18:02 -- target/shutdown.sh@141 -- # true 00:25:32.651 08:18:02 -- target/shutdown.sh@143 -- # stoptarget 00:25:32.651 08:18:02 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:32.651 08:18:02 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:32.651 08:18:02 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:32.651 08:18:02 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:32.651 08:18:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:32.651 08:18:02 -- nvmf/common.sh@116 -- # sync 00:25:32.651 08:18:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:32.651 08:18:02 -- nvmf/common.sh@119 -- # set +e 00:25:32.651 08:18:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:32.651 08:18:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:32.651 rmmod nvme_tcp 00:25:32.651 rmmod nvme_fabrics 00:25:32.651 rmmod nvme_keyring 00:25:32.651 08:18:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:32.651 08:18:02 -- nvmf/common.sh@123 -- # set -e 00:25:32.651 08:18:02 -- nvmf/common.sh@124 -- # return 0 00:25:32.651 08:18:02 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:32.651 08:18:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:32.651 08:18:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:32.651 08:18:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:32.651 08:18:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.651 08:18:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:32.651 08:18:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.651 08:18:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.651 08:18:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.568 08:18:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:34.568 00:25:34.568 real 0m7.125s 00:25:34.568 user 0m15.898s 00:25:34.568 sys 0m1.162s 00:25:34.568 08:18:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.568 08:18:05 -- common/autotest_common.sh@10 -- # set +x 00:25:34.568 ************************************ 00:25:34.568 END TEST nvmf_shutdown_tc3 00:25:34.568 ************************************ 00:25:34.568 08:18:05 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:34.568 00:25:34.568 real 0m31.123s 00:25:34.568 user 1m11.308s 00:25:34.568 sys 0m8.984s 00:25:34.568 08:18:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.568 08:18:05 -- common/autotest_common.sh@10 -- # set +x 00:25:34.568 ************************************ 00:25:34.568 END TEST nvmf_shutdown 00:25:34.568 ************************************ 00:25:34.568 08:18:05 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:25:34.568 08:18:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:34.568 08:18:05 -- common/autotest_common.sh@10 -- # set +x 00:25:34.568 08:18:05 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:25:34.568 08:18:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:34.568 08:18:05 -- common/autotest_common.sh@10 -- # set +x 00:25:34.568 
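The connect() failures above with errno = 111 are ECONNREFUSED: the shutdown test kills the target while bdevperf still has I/O queued against cnode1 through cnode10, so every reconnect attempt is refused and bdev_nvme keeps reporting "Resetting controller failed." until the test stops. A minimal sketch of that failure mode, assuming a target pid in $nvmfpid and I/O already in flight (both taken from the log above; this is not the shutdown.sh code itself):

    # abruptly kill the target while the initiator still has queued I/O
    kill -9 "$nvmfpid"
    # every subsequent reconnect from the initiator then fails fast with:
    #   posix_sock_create: connect() failed, errno = 111        (ECONNREFUSED)
    #   _bdev_nvme_reset_ctrlr_complete: Resetting controller failed.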
08:18:05 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:25:34.568 08:18:05 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:34.568 08:18:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:34.568 08:18:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:34.568 08:18:05 -- common/autotest_common.sh@10 -- # set +x 00:25:34.568 ************************************ 00:25:34.568 START TEST nvmf_multicontroller 00:25:34.568 ************************************ 00:25:34.568 08:18:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:34.830 * Looking for test storage... 00:25:34.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.830 08:18:05 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.830 08:18:05 -- nvmf/common.sh@7 -- # uname -s 00:25:34.830 08:18:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.830 08:18:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.830 08:18:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.830 08:18:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.830 08:18:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.830 08:18:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.830 08:18:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.830 08:18:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.830 08:18:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.830 08:18:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.830 08:18:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:34.830 08:18:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:34.830 08:18:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.830 08:18:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.830 08:18:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.830 08:18:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.830 08:18:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.830 08:18:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.830 08:18:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.830 08:18:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.830 08:18:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.830 08:18:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.830 08:18:05 -- paths/export.sh@5 -- # export PATH 00:25:34.830 08:18:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.830 08:18:05 -- nvmf/common.sh@46 -- # : 0 00:25:34.830 08:18:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:34.830 08:18:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:34.830 08:18:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:34.830 08:18:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.830 08:18:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.830 08:18:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:34.830 08:18:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:34.830 08:18:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:34.830 08:18:05 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:34.830 08:18:05 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:34.830 08:18:05 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:34.830 08:18:05 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:34.830 08:18:05 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:34.830 08:18:05 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:34.830 08:18:05 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:34.830 08:18:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:34.830 08:18:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.830 08:18:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:34.830 08:18:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:34.830 08:18:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:34.830 08:18:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.830 08:18:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.830 08:18:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:34.830 08:18:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:34.830 08:18:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:34.830 08:18:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:34.830 08:18:05 -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 08:18:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:42.971 08:18:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:42.971 08:18:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:42.971 08:18:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:42.971 08:18:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:42.971 08:18:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:42.971 08:18:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:42.971 08:18:12 -- nvmf/common.sh@294 -- # net_devs=() 00:25:42.971 08:18:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:42.971 08:18:12 -- nvmf/common.sh@295 -- # e810=() 00:25:42.971 08:18:12 -- nvmf/common.sh@295 -- # local -ga e810 00:25:42.971 08:18:12 -- nvmf/common.sh@296 -- # x722=() 00:25:42.971 08:18:12 -- nvmf/common.sh@296 -- # local -ga x722 00:25:42.971 08:18:12 -- nvmf/common.sh@297 -- # mlx=() 00:25:42.971 08:18:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:42.971 08:18:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.971 08:18:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:42.971 08:18:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:42.971 08:18:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:42.971 08:18:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:42.971 08:18:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:42.971 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:42.971 08:18:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:42.971 08:18:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:42.971 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:42.971 08:18:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:25:42.971 08:18:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:42.971 08:18:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:42.971 08:18:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.971 08:18:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:42.971 08:18:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.971 08:18:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:42.971 Found net devices under 0000:31:00.0: cvl_0_0 00:25:42.971 08:18:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.971 08:18:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:42.971 08:18:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.971 08:18:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:42.971 08:18:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.971 08:18:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:42.971 Found net devices under 0000:31:00.1: cvl_0_1 00:25:42.971 08:18:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.971 08:18:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:42.971 08:18:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:42.971 08:18:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:42.971 08:18:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.971 08:18:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.971 08:18:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.971 08:18:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:42.971 08:18:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.971 08:18:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.971 08:18:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:42.971 08:18:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.971 08:18:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.971 08:18:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:42.971 08:18:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:42.971 08:18:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.971 08:18:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.971 08:18:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.971 08:18:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.971 08:18:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:42.971 08:18:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.971 08:18:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.971 08:18:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:25:42.971 08:18:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:42.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:25:42.971 00:25:42.971 --- 10.0.0.2 ping statistics --- 00:25:42.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.971 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:25:42.971 08:18:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:42.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:25:42.971 00:25:42.971 --- 10.0.0.1 ping statistics --- 00:25:42.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.971 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:25:42.971 08:18:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.971 08:18:12 -- nvmf/common.sh@410 -- # return 0 00:25:42.971 08:18:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:42.971 08:18:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.971 08:18:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:42.971 08:18:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.971 08:18:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:42.971 08:18:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:42.971 08:18:12 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:42.971 08:18:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:42.971 08:18:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:42.971 08:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 08:18:12 -- nvmf/common.sh@469 -- # nvmfpid=1168552 00:25:42.971 08:18:12 -- nvmf/common.sh@470 -- # waitforlisten 1168552 00:25:42.971 08:18:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:42.971 08:18:12 -- common/autotest_common.sh@819 -- # '[' -z 1168552 ']' 00:25:42.971 08:18:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.971 08:18:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:42.971 08:18:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.971 08:18:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:42.971 08:18:12 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 [2024-06-11 08:18:12.695829] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:42.971 [2024-06-11 08:18:12.695888] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.971 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.971 [2024-06-11 08:18:12.784946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:42.971 [2024-06-11 08:18:12.875938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:42.971 [2024-06-11 08:18:12.876105] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:42.971 [2024-06-11 08:18:12.876117] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.971 [2024-06-11 08:18:12.876124] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.971 [2024-06-11 08:18:12.876277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.971 [2024-06-11 08:18:12.876454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.971 [2024-06-11 08:18:12.876464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.971 08:18:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:42.971 08:18:13 -- common/autotest_common.sh@852 -- # return 0 00:25:42.971 08:18:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:42.971 08:18:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 08:18:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.971 08:18:13 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 [2024-06-11 08:18:13.506309] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.971 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.971 08:18:13 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 Malloc0 00:25:42.971 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.971 08:18:13 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.971 08:18:13 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.971 08:18:13 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 [2024-06-11 08:18:13.570830] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.971 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.971 08:18:13 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 [2024-06-11 08:18:13.582794] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:42.971 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
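The rpc_cmd calls above configure the target side of the multicontroller test: a 64 MiB malloc bdev exported through nqn.2016-06.io.spdk:cnode1 with TCP listeners on both 4420 and 4421 (the same is repeated for Malloc1/cnode2 just below). rpc_cmd is a wrapper around scripts/rpc.py, so an equivalent standalone sequence would look roughly like this (a sketch, assuming the target's default /var/tmp/spdk.sock RPC socket and paths relative to the spdk tree):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421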
00:25:42.971 08:18:13 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.971 Malloc1 00:25:42.971 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.971 08:18:13 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:42.971 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.971 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.231 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.231 08:18:13 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:43.231 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.231 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.231 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.231 08:18:13 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:43.231 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.231 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.231 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.231 08:18:13 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:43.231 08:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.231 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.231 08:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.231 08:18:13 -- host/multicontroller.sh@44 -- # bdevperf_pid=1168867 00:25:43.231 08:18:13 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.231 08:18:13 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:43.231 08:18:13 -- host/multicontroller.sh@47 -- # waitforlisten 1168867 /var/tmp/bdevperf.sock 00:25:43.231 08:18:13 -- common/autotest_common.sh@819 -- # '[' -z 1168867 ']' 00:25:43.231 08:18:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:43.231 08:18:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:43.231 08:18:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:43.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:43.231 08:18:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:43.231 08:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 08:18:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:44.173 08:18:14 -- common/autotest_common.sh@852 -- # return 0 00:25:44.173 08:18:14 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 NVMe0n1 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.173 08:18:14 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.173 1 00:25:44.173 08:18:14 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:44.173 08:18:14 -- common/autotest_common.sh@640 -- # local es=0 00:25:44.173 08:18:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:44.173 08:18:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 request: 00:25:44.173 { 00:25:44.173 "name": "NVMe0", 00:25:44.173 "trtype": "tcp", 00:25:44.173 "traddr": "10.0.0.2", 00:25:44.173 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:44.173 "hostaddr": "10.0.0.2", 00:25:44.173 "hostsvcid": "60000", 00:25:44.173 "adrfam": "ipv4", 00:25:44.173 "trsvcid": "4420", 00:25:44.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.173 "method": "bdev_nvme_attach_controller", 00:25:44.173 "req_id": 1 00:25:44.173 } 00:25:44.173 Got JSON-RPC error response 00:25:44.173 response: 00:25:44.173 { 00:25:44.173 "code": -114, 00:25:44.173 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:44.173 } 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # es=1 00:25:44.173 08:18:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:44.173 08:18:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:44.173 08:18:14 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:44.173 08:18:14 -- common/autotest_common.sh@640 -- # local es=0 00:25:44.173 08:18:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:44.173 08:18:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 request: 00:25:44.173 { 00:25:44.173 "name": "NVMe0", 00:25:44.173 "trtype": "tcp", 00:25:44.173 "traddr": "10.0.0.2", 00:25:44.173 "hostaddr": "10.0.0.2", 00:25:44.173 "hostsvcid": "60000", 00:25:44.173 "adrfam": "ipv4", 00:25:44.173 "trsvcid": "4420", 00:25:44.173 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:44.173 "method": "bdev_nvme_attach_controller", 00:25:44.173 "req_id": 1 00:25:44.173 } 00:25:44.173 Got JSON-RPC error response 00:25:44.173 response: 00:25:44.173 { 00:25:44.173 "code": -114, 00:25:44.173 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:44.173 } 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # es=1 00:25:44.173 08:18:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:44.173 08:18:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:44.173 08:18:14 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@640 -- # local es=0 00:25:44.173 08:18:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 request: 00:25:44.173 { 00:25:44.173 "name": "NVMe0", 00:25:44.173 "trtype": "tcp", 00:25:44.173 "traddr": "10.0.0.2", 00:25:44.173 "hostaddr": 
"10.0.0.2", 00:25:44.173 "hostsvcid": "60000", 00:25:44.173 "adrfam": "ipv4", 00:25:44.173 "trsvcid": "4420", 00:25:44.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.173 "multipath": "disable", 00:25:44.173 "method": "bdev_nvme_attach_controller", 00:25:44.173 "req_id": 1 00:25:44.173 } 00:25:44.173 Got JSON-RPC error response 00:25:44.173 response: 00:25:44.173 { 00:25:44.173 "code": -114, 00:25:44.173 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:44.173 } 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # es=1 00:25:44.173 08:18:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:44.173 08:18:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:44.173 08:18:14 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:44.173 08:18:14 -- common/autotest_common.sh@640 -- # local es=0 00:25:44.173 08:18:14 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:44.173 08:18:14 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:44.173 08:18:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 request: 00:25:44.173 { 00:25:44.173 "name": "NVMe0", 00:25:44.173 "trtype": "tcp", 00:25:44.173 "traddr": "10.0.0.2", 00:25:44.173 "hostaddr": "10.0.0.2", 00:25:44.173 "hostsvcid": "60000", 00:25:44.173 "adrfam": "ipv4", 00:25:44.173 "trsvcid": "4420", 00:25:44.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.173 "multipath": "failover", 00:25:44.173 "method": "bdev_nvme_attach_controller", 00:25:44.173 "req_id": 1 00:25:44.173 } 00:25:44.173 Got JSON-RPC error response 00:25:44.173 response: 00:25:44.173 { 00:25:44.173 "code": -114, 00:25:44.173 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:44.173 } 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@643 -- # es=1 00:25:44.173 08:18:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:44.173 08:18:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:44.173 08:18:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:44.173 08:18:14 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:44.173 08:18:14 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.173 08:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.173 08:18:14 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:44.173 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.173 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.434 00:25:44.434 08:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.434 08:18:14 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.434 08:18:14 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:44.434 08:18:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.434 08:18:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.434 08:18:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.434 08:18:14 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:44.434 08:18:14 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:45.376 0 00:25:45.376 08:18:15 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:45.376 08:18:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.376 08:18:15 -- common/autotest_common.sh@10 -- # set +x 00:25:45.376 08:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.376 08:18:16 -- host/multicontroller.sh@100 -- # killprocess 1168867 00:25:45.376 08:18:16 -- common/autotest_common.sh@926 -- # '[' -z 1168867 ']' 00:25:45.376 08:18:16 -- common/autotest_common.sh@930 -- # kill -0 1168867 00:25:45.376 08:18:16 -- common/autotest_common.sh@931 -- # uname 00:25:45.376 08:18:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:45.376 08:18:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1168867 00:25:45.637 08:18:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:45.637 08:18:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:45.637 08:18:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1168867' 00:25:45.637 killing process with pid 1168867 00:25:45.637 08:18:16 -- common/autotest_common.sh@945 -- # kill 1168867 00:25:45.637 08:18:16 -- common/autotest_common.sh@950 -- # wait 1168867 00:25:45.637 08:18:16 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.637 08:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.637 08:18:16 -- common/autotest_common.sh@10 -- # set +x 00:25:45.637 08:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.637 08:18:16 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:45.637 08:18:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.637 08:18:16 -- common/autotest_common.sh@10 -- # set +x 00:25:45.637 08:18:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.637 08:18:16 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
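The failures recorded above are the expected multipath guards in bdev_nvme_attach_controller: with "multipath": "disable" a second attach to an existing controller name is rejected with -114, and "failover" is likewise rejected when the request names the exact network path the controller already uses. Attaching the same subsystem on the second portal (4421) then succeeds, as does adding a second named controller with an explicit host address. rpc_cmd here is the test framework's wrapper around scripts/rpc.py; a rough sketch of the passing calls issued directly against the bdevperf RPC socket (socket path, addresses and NQN taken from the log above) would be:

    # add a second path (port 4421) to the existing NVMe0 controller
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # remove that path again
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # attach a second controller with an explicit host address/service id
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # both controllers should now be reported before bdevperf traffic is started
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe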
00:25:45.637 08:18:16 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:45.637 08:18:16 -- common/autotest_common.sh@1597 -- # read -r file 00:25:45.637 08:18:16 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:45.637 08:18:16 -- common/autotest_common.sh@1596 -- # sort -u 00:25:45.637 08:18:16 -- common/autotest_common.sh@1598 -- # cat 00:25:45.637 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:45.637 [2024-06-11 08:18:13.693403] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:45.637 [2024-06-11 08:18:13.693468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168867 ] 00:25:45.637 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.637 [2024-06-11 08:18:13.752968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.637 [2024-06-11 08:18:13.816769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.637 [2024-06-11 08:18:14.849996] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name c4c19e7e-c49b-4dd5-bfc3-19bfc9d48575 already exists 00:25:45.637 [2024-06-11 08:18:14.850026] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:c4c19e7e-c49b-4dd5-bfc3-19bfc9d48575 alias for bdev NVMe1n1 00:25:45.637 [2024-06-11 08:18:14.850037] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:45.637 Running I/O for 1 seconds... 00:25:45.637 00:25:45.637 Latency(us) 00:25:45.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.638 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:45.638 NVMe0n1 : 1.00 26459.56 103.36 0.00 0.00 4826.21 3631.79 12615.68 00:25:45.638 =================================================================================================================== 00:25:45.638 Total : 26459.56 103.36 0.00 0.00 4826.21 3631.79 12615.68 00:25:45.638 Received shutdown signal, test time was about 1.000000 seconds 00:25:45.638 00:25:45.638 Latency(us) 00:25:45.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.638 =================================================================================================================== 00:25:45.638 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.638 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:45.638 08:18:16 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:45.638 08:18:16 -- common/autotest_common.sh@1597 -- # read -r file 00:25:45.638 08:18:16 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:45.638 08:18:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:45.638 08:18:16 -- nvmf/common.sh@116 -- # sync 00:25:45.638 08:18:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:45.638 08:18:16 -- nvmf/common.sh@119 -- # set +e 00:25:45.638 08:18:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:45.638 08:18:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:45.638 rmmod nvme_tcp 00:25:45.638 rmmod nvme_fabrics 00:25:45.898 rmmod nvme_keyring 00:25:45.898 08:18:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:45.898 08:18:16 -- nvmf/common.sh@123 -- # 
set -e 00:25:45.898 08:18:16 -- nvmf/common.sh@124 -- # return 0 00:25:45.898 08:18:16 -- nvmf/common.sh@477 -- # '[' -n 1168552 ']' 00:25:45.898 08:18:16 -- nvmf/common.sh@478 -- # killprocess 1168552 00:25:45.898 08:18:16 -- common/autotest_common.sh@926 -- # '[' -z 1168552 ']' 00:25:45.898 08:18:16 -- common/autotest_common.sh@930 -- # kill -0 1168552 00:25:45.898 08:18:16 -- common/autotest_common.sh@931 -- # uname 00:25:45.898 08:18:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:45.898 08:18:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1168552 00:25:45.898 08:18:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:45.898 08:18:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:45.898 08:18:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1168552' 00:25:45.898 killing process with pid 1168552 00:25:45.898 08:18:16 -- common/autotest_common.sh@945 -- # kill 1168552 00:25:45.898 08:18:16 -- common/autotest_common.sh@950 -- # wait 1168552 00:25:45.898 08:18:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:45.898 08:18:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:45.898 08:18:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:45.898 08:18:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.898 08:18:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:45.898 08:18:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.898 08:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.898 08:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.439 08:18:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:48.439 00:25:48.439 real 0m13.448s 00:25:48.439 user 0m15.851s 00:25:48.439 sys 0m6.147s 00:25:48.439 08:18:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.439 08:18:18 -- common/autotest_common.sh@10 -- # set +x 00:25:48.439 ************************************ 00:25:48.439 END TEST nvmf_multicontroller 00:25:48.439 ************************************ 00:25:48.439 08:18:18 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:48.439 08:18:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:48.439 08:18:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.439 08:18:18 -- common/autotest_common.sh@10 -- # set +x 00:25:48.439 ************************************ 00:25:48.439 START TEST nvmf_aer 00:25:48.439 ************************************ 00:25:48.440 08:18:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:48.440 * Looking for test storage... 
00:25:48.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.440 08:18:18 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.440 08:18:18 -- nvmf/common.sh@7 -- # uname -s 00:25:48.440 08:18:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.440 08:18:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.440 08:18:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.440 08:18:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.440 08:18:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.440 08:18:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.440 08:18:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.440 08:18:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.440 08:18:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.440 08:18:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.440 08:18:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:48.440 08:18:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:48.440 08:18:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.440 08:18:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.440 08:18:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.440 08:18:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.440 08:18:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.440 08:18:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.440 08:18:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.440 08:18:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.440 08:18:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.440 08:18:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.440 08:18:18 -- paths/export.sh@5 -- # export PATH 00:25:48.440 08:18:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.440 08:18:18 -- nvmf/common.sh@46 -- # : 0 00:25:48.440 08:18:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:48.440 08:18:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:48.440 08:18:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:48.440 08:18:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.440 08:18:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.440 08:18:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:48.440 08:18:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:48.440 08:18:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:48.440 08:18:18 -- host/aer.sh@11 -- # nvmftestinit 00:25:48.440 08:18:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.440 08:18:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.440 08:18:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.440 08:18:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.440 08:18:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.440 08:18:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.440 08:18:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.440 08:18:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.440 08:18:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:48.440 08:18:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:48.440 08:18:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:48.440 08:18:18 -- common/autotest_common.sh@10 -- # set +x 00:25:55.029 08:18:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:55.029 08:18:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:55.029 08:18:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:55.029 08:18:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:55.030 08:18:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:55.030 08:18:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:55.030 08:18:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:55.030 08:18:25 -- nvmf/common.sh@294 -- # net_devs=() 00:25:55.030 08:18:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:55.030 08:18:25 -- nvmf/common.sh@295 -- # e810=() 00:25:55.030 08:18:25 -- nvmf/common.sh@295 -- # local -ga e810 00:25:55.030 08:18:25 -- nvmf/common.sh@296 -- # x722=() 00:25:55.030 
08:18:25 -- nvmf/common.sh@296 -- # local -ga x722 00:25:55.030 08:18:25 -- nvmf/common.sh@297 -- # mlx=() 00:25:55.030 08:18:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:55.030 08:18:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.030 08:18:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:55.030 08:18:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:55.030 08:18:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:55.030 08:18:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:55.030 08:18:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:55.030 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:55.030 08:18:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:55.030 08:18:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:55.030 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:55.030 08:18:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:55.030 08:18:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:55.030 08:18:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.030 08:18:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:55.030 08:18:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.030 08:18:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:55.030 Found net devices under 0000:31:00.0: cvl_0_0 00:25:55.030 08:18:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.030 08:18:25 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:55.030 08:18:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.030 08:18:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:55.030 08:18:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.030 08:18:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:55.030 Found net devices under 0000:31:00.1: cvl_0_1 00:25:55.030 08:18:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.030 08:18:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:55.030 08:18:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:55.030 08:18:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:55.030 08:18:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:55.030 08:18:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.030 08:18:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.030 08:18:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.030 08:18:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:55.030 08:18:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.030 08:18:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.030 08:18:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:55.030 08:18:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.030 08:18:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:55.030 08:18:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:55.030 08:18:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:55.030 08:18:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.030 08:18:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.030 08:18:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.030 08:18:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.030 08:18:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:55.030 08:18:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.291 08:18:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.291 08:18:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.291 08:18:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:55.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:25:55.291 00:25:55.291 --- 10.0.0.2 ping statistics --- 00:25:55.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.291 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:25:55.291 08:18:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:25:55.291 00:25:55.291 --- 10.0.0.1 ping statistics --- 00:25:55.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.291 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:55.291 08:18:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.291 08:18:25 -- nvmf/common.sh@410 -- # return 0 00:25:55.291 08:18:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:55.291 08:18:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.291 08:18:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:55.291 08:18:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:55.291 08:18:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.291 08:18:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:55.292 08:18:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:55.292 08:18:25 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:55.292 08:18:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:55.292 08:18:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:55.292 08:18:25 -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 08:18:25 -- nvmf/common.sh@469 -- # nvmfpid=1173624 00:25:55.292 08:18:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:55.292 08:18:25 -- nvmf/common.sh@470 -- # waitforlisten 1173624 00:25:55.292 08:18:25 -- common/autotest_common.sh@819 -- # '[' -z 1173624 ']' 00:25:55.292 08:18:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.292 08:18:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:55.292 08:18:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.292 08:18:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:55.292 08:18:25 -- common/autotest_common.sh@10 -- # set +x 00:25:55.292 [2024-06-11 08:18:25.882135] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:55.292 [2024-06-11 08:18:25.882182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.292 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.553 [2024-06-11 08:18:25.949259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.553 [2024-06-11 08:18:26.012685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:55.553 [2024-06-11 08:18:26.012809] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.553 [2024-06-11 08:18:26.012817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.553 [2024-06-11 08:18:26.012824] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:55.553 [2024-06-11 08:18:26.012964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.553 [2024-06-11 08:18:26.013079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.553 [2024-06-11 08:18:26.013235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.553 [2024-06-11 08:18:26.013236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.124 08:18:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:56.124 08:18:26 -- common/autotest_common.sh@852 -- # return 0 00:25:56.124 08:18:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:56.124 08:18:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:56.124 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.124 08:18:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.124 08:18:26 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:56.124 08:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.124 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.124 [2024-06-11 08:18:26.690642] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.124 08:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.124 08:18:26 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:56.124 08:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.124 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.124 Malloc0 00:25:56.124 08:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.124 08:18:26 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:56.124 08:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.124 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.124 08:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.124 08:18:26 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.124 08:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.124 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.124 08:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.124 08:18:26 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.124 08:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.124 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.124 [2024-06-11 08:18:26.750078] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.124 08:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.124 08:18:26 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:56.124 08:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.124 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.124 [2024-06-11 08:18:26.761879] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:56.124 [ 00:25:56.124 { 00:25:56.124 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:56.124 "subtype": "Discovery", 00:25:56.124 "listen_addresses": [], 00:25:56.124 "allow_any_host": true, 00:25:56.124 "hosts": [] 00:25:56.124 }, 00:25:56.124 { 00:25:56.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:56.124 "subtype": "NVMe", 00:25:56.124 "listen_addresses": [ 00:25:56.124 { 00:25:56.124 "transport": "TCP", 00:25:56.124 "trtype": "TCP", 00:25:56.124 "adrfam": "IPv4", 00:25:56.124 "traddr": "10.0.0.2", 00:25:56.124 "trsvcid": "4420" 00:25:56.124 } 00:25:56.124 ], 00:25:56.124 "allow_any_host": true, 00:25:56.124 "hosts": [], 00:25:56.124 "serial_number": "SPDK00000000000001", 00:25:56.124 "model_number": "SPDK bdev Controller", 00:25:56.124 "max_namespaces": 2, 00:25:56.124 "min_cntlid": 1, 00:25:56.124 "max_cntlid": 65519, 00:25:56.124 "namespaces": [ 00:25:56.124 { 00:25:56.124 "nsid": 1, 00:25:56.124 "bdev_name": "Malloc0", 00:25:56.385 "name": "Malloc0", 00:25:56.385 "nguid": "79BFF319ED754EC2B791747F131F1B0C", 00:25:56.385 "uuid": "79bff319-ed75-4ec2-b791-747f131f1b0c" 00:25:56.385 } 00:25:56.385 ] 00:25:56.385 } 00:25:56.385 ] 00:25:56.385 08:18:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.385 08:18:26 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:56.385 08:18:26 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:56.385 08:18:26 -- host/aer.sh@33 -- # aerpid=1173685 00:25:56.385 08:18:26 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:56.385 08:18:26 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:56.385 08:18:26 -- common/autotest_common.sh@1244 -- # local i=0 00:25:56.385 08:18:26 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:56.386 08:18:26 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:56.386 08:18:26 -- common/autotest_common.sh@1247 -- # i=1 00:25:56.386 08:18:26 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:56.386 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.386 08:18:26 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:56.386 08:18:26 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:56.386 08:18:26 -- common/autotest_common.sh@1247 -- # i=2 00:25:56.386 08:18:26 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:56.386 08:18:26 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:56.386 08:18:26 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:56.386 08:18:26 -- common/autotest_common.sh@1255 -- # return 0 00:25:56.386 08:18:26 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:56.386 08:18:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.386 08:18:26 -- common/autotest_common.sh@10 -- # set +x 00:25:56.386 Malloc1 00:25:56.386 08:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.386 08:18:27 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:56.386 08:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.386 08:18:27 -- common/autotest_common.sh@10 -- # set +x 00:25:56.647 08:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.647 08:18:27 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:56.647 08:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.647 08:18:27 -- common/autotest_common.sh@10 -- # set +x 00:25:56.647 Asynchronous Event Request test 00:25:56.647 Attaching to 10.0.0.2 00:25:56.647 Attached to 10.0.0.2 00:25:56.647 Registering asynchronous event callbacks... 
00:25:56.647 Starting namespace attribute notice tests for all controllers... 00:25:56.647 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:56.647 aer_cb - Changed Namespace 00:25:56.647 Cleaning up... 00:25:56.647 [ 00:25:56.647 { 00:25:56.647 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:56.647 "subtype": "Discovery", 00:25:56.647 "listen_addresses": [], 00:25:56.647 "allow_any_host": true, 00:25:56.647 "hosts": [] 00:25:56.647 }, 00:25:56.647 { 00:25:56.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.647 "subtype": "NVMe", 00:25:56.647 "listen_addresses": [ 00:25:56.647 { 00:25:56.647 "transport": "TCP", 00:25:56.647 "trtype": "TCP", 00:25:56.647 "adrfam": "IPv4", 00:25:56.647 "traddr": "10.0.0.2", 00:25:56.647 "trsvcid": "4420" 00:25:56.647 } 00:25:56.647 ], 00:25:56.647 "allow_any_host": true, 00:25:56.647 "hosts": [], 00:25:56.647 "serial_number": "SPDK00000000000001", 00:25:56.647 "model_number": "SPDK bdev Controller", 00:25:56.647 "max_namespaces": 2, 00:25:56.647 "min_cntlid": 1, 00:25:56.647 "max_cntlid": 65519, 00:25:56.647 "namespaces": [ 00:25:56.647 { 00:25:56.647 "nsid": 1, 00:25:56.647 "bdev_name": "Malloc0", 00:25:56.647 "name": "Malloc0", 00:25:56.647 "nguid": "79BFF319ED754EC2B791747F131F1B0C", 00:25:56.647 "uuid": "79bff319-ed75-4ec2-b791-747f131f1b0c" 00:25:56.647 }, 00:25:56.647 { 00:25:56.647 "nsid": 2, 00:25:56.647 "bdev_name": "Malloc1", 00:25:56.647 "name": "Malloc1", 00:25:56.647 "nguid": "0283478DB496468E8B5AC49B062E3AD7", 00:25:56.647 "uuid": "0283478d-b496-468e-8b5a-c49b062e3ad7" 00:25:56.647 } 00:25:56.647 ] 00:25:56.647 } 00:25:56.647 ] 00:25:56.647 08:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.647 08:18:27 -- host/aer.sh@43 -- # wait 1173685 00:25:56.647 08:18:27 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:56.647 08:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.647 08:18:27 -- common/autotest_common.sh@10 -- # set +x 00:25:56.647 08:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.647 08:18:27 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:56.647 08:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.647 08:18:27 -- common/autotest_common.sh@10 -- # set +x 00:25:56.647 08:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.647 08:18:27 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.647 08:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.647 08:18:27 -- common/autotest_common.sh@10 -- # set +x 00:25:56.647 08:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.647 08:18:27 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:56.647 08:18:27 -- host/aer.sh@51 -- # nvmftestfini 00:25:56.647 08:18:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:56.647 08:18:27 -- nvmf/common.sh@116 -- # sync 00:25:56.647 08:18:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:56.647 08:18:27 -- nvmf/common.sh@119 -- # set +e 00:25:56.648 08:18:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:56.648 08:18:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:56.648 rmmod nvme_tcp 00:25:56.648 rmmod nvme_fabrics 00:25:56.648 rmmod nvme_keyring 00:25:56.648 08:18:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:56.648 08:18:27 -- nvmf/common.sh@123 -- # set -e 00:25:56.648 08:18:27 -- nvmf/common.sh@124 -- # return 0 00:25:56.648 08:18:27 -- nvmf/common.sh@477 -- # '[' -n 1173624 ']' 00:25:56.648 08:18:27 
-- nvmf/common.sh@478 -- # killprocess 1173624 00:25:56.648 08:18:27 -- common/autotest_common.sh@926 -- # '[' -z 1173624 ']' 00:25:56.648 08:18:27 -- common/autotest_common.sh@930 -- # kill -0 1173624 00:25:56.648 08:18:27 -- common/autotest_common.sh@931 -- # uname 00:25:56.648 08:18:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:56.648 08:18:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1173624 00:25:56.648 08:18:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:56.648 08:18:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:56.648 08:18:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1173624' 00:25:56.648 killing process with pid 1173624 00:25:56.648 08:18:27 -- common/autotest_common.sh@945 -- # kill 1173624 00:25:56.648 [2024-06-11 08:18:27.229007] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:56.648 08:18:27 -- common/autotest_common.sh@950 -- # wait 1173624 00:25:56.909 08:18:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:56.909 08:18:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:56.909 08:18:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:56.909 08:18:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:56.909 08:18:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:56.909 08:18:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.909 08:18:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.909 08:18:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.821 08:18:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:58.821 00:25:58.821 real 0m10.789s 00:25:58.821 user 0m7.458s 00:25:58.821 sys 0m5.573s 00:25:58.821 08:18:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.821 08:18:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.821 ************************************ 00:25:58.821 END TEST nvmf_aer 00:25:58.821 ************************************ 00:25:59.082 08:18:29 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:59.082 08:18:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:59.082 08:18:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:59.082 08:18:29 -- common/autotest_common.sh@10 -- # set +x 00:25:59.082 ************************************ 00:25:59.082 START TEST nvmf_async_init 00:25:59.082 ************************************ 00:25:59.082 08:18:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:59.082 * Looking for test storage... 
00:25:59.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.082 08:18:29 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.082 08:18:29 -- nvmf/common.sh@7 -- # uname -s 00:25:59.082 08:18:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.082 08:18:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.082 08:18:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.082 08:18:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.082 08:18:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.082 08:18:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.082 08:18:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.082 08:18:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.082 08:18:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.082 08:18:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.082 08:18:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:59.082 08:18:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:59.082 08:18:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.082 08:18:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.082 08:18:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.082 08:18:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.082 08:18:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.082 08:18:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.082 08:18:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.082 08:18:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.082 08:18:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.082 08:18:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.082 08:18:29 -- paths/export.sh@5 -- # export PATH 00:25:59.082 08:18:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.082 08:18:29 -- nvmf/common.sh@46 -- # : 0 00:25:59.082 08:18:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:59.082 08:18:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:59.082 08:18:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:59.082 08:18:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.082 08:18:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.082 08:18:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:59.082 08:18:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:59.082 08:18:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:59.082 08:18:29 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:59.082 08:18:29 -- host/async_init.sh@14 -- # null_block_size=512 00:25:59.082 08:18:29 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:59.082 08:18:29 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:59.082 08:18:29 -- host/async_init.sh@20 -- # uuidgen 00:25:59.082 08:18:29 -- host/async_init.sh@20 -- # tr -d - 00:25:59.082 08:18:29 -- host/async_init.sh@20 -- # nguid=7ea665e3901048109413a298ed8f31b5 00:25:59.082 08:18:29 -- host/async_init.sh@22 -- # nvmftestinit 00:25:59.082 08:18:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:59.083 08:18:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.083 08:18:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:59.083 08:18:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:59.083 08:18:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:59.083 08:18:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.083 08:18:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.083 08:18:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.083 08:18:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:59.083 08:18:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:59.083 08:18:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:59.083 08:18:29 -- common/autotest_common.sh@10 -- # set +x 00:26:07.311 08:18:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:07.311 08:18:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:07.311 08:18:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:07.311 08:18:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:07.311 08:18:36 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:07.311 08:18:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:07.311 08:18:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:07.311 08:18:36 -- nvmf/common.sh@294 -- # net_devs=() 00:26:07.311 08:18:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:07.311 08:18:36 -- nvmf/common.sh@295 -- # e810=() 00:26:07.311 08:18:36 -- nvmf/common.sh@295 -- # local -ga e810 00:26:07.311 08:18:36 -- nvmf/common.sh@296 -- # x722=() 00:26:07.311 08:18:36 -- nvmf/common.sh@296 -- # local -ga x722 00:26:07.311 08:18:36 -- nvmf/common.sh@297 -- # mlx=() 00:26:07.311 08:18:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:07.311 08:18:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.311 08:18:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:07.311 08:18:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:07.311 08:18:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:07.311 08:18:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:07.311 08:18:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:07.311 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:07.311 08:18:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:07.311 08:18:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:07.311 08:18:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:07.312 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:07.312 08:18:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:07.312 08:18:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:07.312 
08:18:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.312 08:18:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:07.312 08:18:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.312 08:18:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:07.312 Found net devices under 0000:31:00.0: cvl_0_0 00:26:07.312 08:18:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.312 08:18:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:07.312 08:18:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.312 08:18:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:07.312 08:18:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.312 08:18:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:07.312 Found net devices under 0000:31:00.1: cvl_0_1 00:26:07.312 08:18:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.312 08:18:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:07.312 08:18:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:07.312 08:18:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:07.312 08:18:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.312 08:18:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.312 08:18:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.312 08:18:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:07.312 08:18:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.312 08:18:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.312 08:18:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:07.312 08:18:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.312 08:18:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.312 08:18:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:07.312 08:18:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:07.312 08:18:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.312 08:18:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.312 08:18:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.312 08:18:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.312 08:18:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:07.312 08:18:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.312 08:18:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:07.312 08:18:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:07.312 08:18:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:07.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:26:07.312 00:26:07.312 --- 10.0.0.2 ping statistics --- 00:26:07.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.312 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:26:07.312 08:18:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:07.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:26:07.312 00:26:07.312 --- 10.0.0.1 ping statistics --- 00:26:07.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.312 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:26:07.312 08:18:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.312 08:18:36 -- nvmf/common.sh@410 -- # return 0 00:26:07.312 08:18:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:07.312 08:18:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.312 08:18:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:07.312 08:18:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.312 08:18:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:07.312 08:18:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:07.312 08:18:36 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:07.312 08:18:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:07.312 08:18:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:07.312 08:18:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 08:18:36 -- nvmf/common.sh@469 -- # nvmfpid=1178068 00:26:07.312 08:18:36 -- nvmf/common.sh@470 -- # waitforlisten 1178068 00:26:07.312 08:18:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:07.312 08:18:36 -- common/autotest_common.sh@819 -- # '[' -z 1178068 ']' 00:26:07.312 08:18:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.312 08:18:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:07.312 08:18:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.312 08:18:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:07.312 08:18:36 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 [2024-06-11 08:18:36.918473] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:07.312 [2024-06-11 08:18:36.918536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.312 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.312 [2024-06-11 08:18:36.991379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.312 [2024-06-11 08:18:37.063171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:07.312 [2024-06-11 08:18:37.063295] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.312 [2024-06-11 08:18:37.063303] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.312 [2024-06-11 08:18:37.063310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:07.312 [2024-06-11 08:18:37.063329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.312 08:18:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:07.312 08:18:37 -- common/autotest_common.sh@852 -- # return 0 00:26:07.312 08:18:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:07.312 08:18:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:07.312 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 08:18:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.312 08:18:37 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:07.312 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.312 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 [2024-06-11 08:18:37.722444] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.312 08:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.312 08:18:37 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:07.312 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.312 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 null0 00:26:07.312 08:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.312 08:18:37 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:07.312 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.312 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 08:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.312 08:18:37 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:07.312 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.312 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 08:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.312 08:18:37 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7ea665e3901048109413a298ed8f31b5 00:26:07.312 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.312 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.312 08:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.312 08:18:37 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:07.313 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.313 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.313 [2024-06-11 08:18:37.762680] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.313 08:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.313 08:18:37 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:07.313 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.313 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 nvme0n1 00:26:07.577 08:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.577 08:18:37 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:07.577 08:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.577 08:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 [ 00:26:07.577 { 00:26:07.577 "name": "nvme0n1", 00:26:07.577 "aliases": [ 00:26:07.577 
"7ea665e3-9010-4810-9413-a298ed8f31b5" 00:26:07.577 ], 00:26:07.577 "product_name": "NVMe disk", 00:26:07.577 "block_size": 512, 00:26:07.577 "num_blocks": 2097152, 00:26:07.577 "uuid": "7ea665e3-9010-4810-9413-a298ed8f31b5", 00:26:07.577 "assigned_rate_limits": { 00:26:07.577 "rw_ios_per_sec": 0, 00:26:07.577 "rw_mbytes_per_sec": 0, 00:26:07.577 "r_mbytes_per_sec": 0, 00:26:07.577 "w_mbytes_per_sec": 0 00:26:07.577 }, 00:26:07.577 "claimed": false, 00:26:07.577 "zoned": false, 00:26:07.577 "supported_io_types": { 00:26:07.577 "read": true, 00:26:07.577 "write": true, 00:26:07.577 "unmap": false, 00:26:07.577 "write_zeroes": true, 00:26:07.577 "flush": true, 00:26:07.577 "reset": true, 00:26:07.577 "compare": true, 00:26:07.577 "compare_and_write": true, 00:26:07.577 "abort": true, 00:26:07.577 "nvme_admin": true, 00:26:07.577 "nvme_io": true 00:26:07.577 }, 00:26:07.577 "driver_specific": { 00:26:07.577 "nvme": [ 00:26:07.577 { 00:26:07.577 "trid": { 00:26:07.577 "trtype": "TCP", 00:26:07.577 "adrfam": "IPv4", 00:26:07.577 "traddr": "10.0.0.2", 00:26:07.577 "trsvcid": "4420", 00:26:07.577 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:07.577 }, 00:26:07.577 "ctrlr_data": { 00:26:07.577 "cntlid": 1, 00:26:07.577 "vendor_id": "0x8086", 00:26:07.577 "model_number": "SPDK bdev Controller", 00:26:07.577 "serial_number": "00000000000000000000", 00:26:07.577 "firmware_revision": "24.01.1", 00:26:07.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.577 "oacs": { 00:26:07.577 "security": 0, 00:26:07.577 "format": 0, 00:26:07.577 "firmware": 0, 00:26:07.577 "ns_manage": 0 00:26:07.577 }, 00:26:07.577 "multi_ctrlr": true, 00:26:07.577 "ana_reporting": false 00:26:07.577 }, 00:26:07.577 "vs": { 00:26:07.577 "nvme_version": "1.3" 00:26:07.577 }, 00:26:07.577 "ns_data": { 00:26:07.577 "id": 1, 00:26:07.577 "can_share": true 00:26:07.577 } 00:26:07.577 } 00:26:07.577 ], 00:26:07.577 "mp_policy": "active_passive" 00:26:07.577 } 00:26:07.577 } 00:26:07.577 ] 00:26:07.577 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.577 08:18:38 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:07.577 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.577 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 [2024-06-11 08:18:38.015186] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:07.577 [2024-06-11 08:18:38.015246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c75a90 (9): Bad file descriptor 00:26:07.577 [2024-06-11 08:18:38.158544] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:07.577 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.577 08:18:38 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:07.577 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.577 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 [ 00:26:07.577 { 00:26:07.577 "name": "nvme0n1", 00:26:07.577 "aliases": [ 00:26:07.577 "7ea665e3-9010-4810-9413-a298ed8f31b5" 00:26:07.577 ], 00:26:07.577 "product_name": "NVMe disk", 00:26:07.577 "block_size": 512, 00:26:07.577 "num_blocks": 2097152, 00:26:07.577 "uuid": "7ea665e3-9010-4810-9413-a298ed8f31b5", 00:26:07.577 "assigned_rate_limits": { 00:26:07.577 "rw_ios_per_sec": 0, 00:26:07.577 "rw_mbytes_per_sec": 0, 00:26:07.577 "r_mbytes_per_sec": 0, 00:26:07.577 "w_mbytes_per_sec": 0 00:26:07.577 }, 00:26:07.577 "claimed": false, 00:26:07.577 "zoned": false, 00:26:07.577 "supported_io_types": { 00:26:07.577 "read": true, 00:26:07.577 "write": true, 00:26:07.577 "unmap": false, 00:26:07.577 "write_zeroes": true, 00:26:07.577 "flush": true, 00:26:07.577 "reset": true, 00:26:07.577 "compare": true, 00:26:07.577 "compare_and_write": true, 00:26:07.577 "abort": true, 00:26:07.577 "nvme_admin": true, 00:26:07.577 "nvme_io": true 00:26:07.577 }, 00:26:07.577 "driver_specific": { 00:26:07.577 "nvme": [ 00:26:07.577 { 00:26:07.577 "trid": { 00:26:07.577 "trtype": "TCP", 00:26:07.577 "adrfam": "IPv4", 00:26:07.577 "traddr": "10.0.0.2", 00:26:07.577 "trsvcid": "4420", 00:26:07.577 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:07.577 }, 00:26:07.577 "ctrlr_data": { 00:26:07.577 "cntlid": 2, 00:26:07.577 "vendor_id": "0x8086", 00:26:07.577 "model_number": "SPDK bdev Controller", 00:26:07.577 "serial_number": "00000000000000000000", 00:26:07.577 "firmware_revision": "24.01.1", 00:26:07.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.577 "oacs": { 00:26:07.577 "security": 0, 00:26:07.577 "format": 0, 00:26:07.577 "firmware": 0, 00:26:07.577 "ns_manage": 0 00:26:07.577 }, 00:26:07.577 "multi_ctrlr": true, 00:26:07.577 "ana_reporting": false 00:26:07.577 }, 00:26:07.577 "vs": { 00:26:07.577 "nvme_version": "1.3" 00:26:07.577 }, 00:26:07.577 "ns_data": { 00:26:07.577 "id": 1, 00:26:07.577 "can_share": true 00:26:07.577 } 00:26:07.577 } 00:26:07.577 ], 00:26:07.577 "mp_policy": "active_passive" 00:26:07.577 } 00:26:07.577 } 00:26:07.577 ] 00:26:07.577 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.577 08:18:38 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.577 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.577 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.577 08:18:38 -- host/async_init.sh@53 -- # mktemp 00:26:07.577 08:18:38 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8z786BagDz 00:26:07.577 08:18:38 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:07.577 08:18:38 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8z786BagDz 00:26:07.577 08:18:38 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:07.577 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.577 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.577 08:18:38 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:07.577 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.577 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.577 [2024-06-11 08:18:38.215813] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:07.577 [2024-06-11 08:18:38.215937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:07.577 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.577 08:18:38 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8z786BagDz 00:26:07.577 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.577 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.838 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.838 08:18:38 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8z786BagDz 00:26:07.838 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.838 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.838 [2024-06-11 08:18:38.231854] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:07.838 nvme0n1 00:26:07.838 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.838 08:18:38 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:07.838 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.838 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.838 [ 00:26:07.838 { 00:26:07.838 "name": "nvme0n1", 00:26:07.838 "aliases": [ 00:26:07.838 "7ea665e3-9010-4810-9413-a298ed8f31b5" 00:26:07.838 ], 00:26:07.838 "product_name": "NVMe disk", 00:26:07.838 "block_size": 512, 00:26:07.838 "num_blocks": 2097152, 00:26:07.838 "uuid": "7ea665e3-9010-4810-9413-a298ed8f31b5", 00:26:07.838 "assigned_rate_limits": { 00:26:07.838 "rw_ios_per_sec": 0, 00:26:07.838 "rw_mbytes_per_sec": 0, 00:26:07.838 "r_mbytes_per_sec": 0, 00:26:07.838 "w_mbytes_per_sec": 0 00:26:07.838 }, 00:26:07.838 "claimed": false, 00:26:07.838 "zoned": false, 00:26:07.838 "supported_io_types": { 00:26:07.838 "read": true, 00:26:07.838 "write": true, 00:26:07.838 "unmap": false, 00:26:07.838 "write_zeroes": true, 00:26:07.838 "flush": true, 00:26:07.838 "reset": true, 00:26:07.838 "compare": true, 00:26:07.838 "compare_and_write": true, 00:26:07.838 "abort": true, 00:26:07.838 "nvme_admin": true, 00:26:07.838 "nvme_io": true 00:26:07.838 }, 00:26:07.838 "driver_specific": { 00:26:07.838 "nvme": [ 00:26:07.838 { 00:26:07.838 "trid": { 00:26:07.838 "trtype": "TCP", 00:26:07.838 "adrfam": "IPv4", 00:26:07.838 "traddr": "10.0.0.2", 00:26:07.838 "trsvcid": "4421", 00:26:07.838 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:07.838 }, 00:26:07.838 "ctrlr_data": { 00:26:07.838 "cntlid": 3, 00:26:07.838 "vendor_id": "0x8086", 00:26:07.838 "model_number": "SPDK bdev Controller", 00:26:07.838 "serial_number": "00000000000000000000", 00:26:07.838 "firmware_revision": "24.01.1", 00:26:07.838 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.838 "oacs": { 00:26:07.838 "security": 0, 00:26:07.838 "format": 0, 00:26:07.838 "firmware": 0, 00:26:07.838 "ns_manage": 0 00:26:07.838 }, 00:26:07.838 "multi_ctrlr": true, 00:26:07.838 "ana_reporting": false 00:26:07.838 }, 00:26:07.838 "vs": 
{ 00:26:07.838 "nvme_version": "1.3" 00:26:07.838 }, 00:26:07.838 "ns_data": { 00:26:07.838 "id": 1, 00:26:07.838 "can_share": true 00:26:07.838 } 00:26:07.838 } 00:26:07.838 ], 00:26:07.838 "mp_policy": "active_passive" 00:26:07.838 } 00:26:07.838 } 00:26:07.838 ] 00:26:07.838 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.838 08:18:38 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.838 08:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:07.838 08:18:38 -- common/autotest_common.sh@10 -- # set +x 00:26:07.839 08:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:07.839 08:18:38 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.8z786BagDz 00:26:07.839 08:18:38 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:07.839 08:18:38 -- host/async_init.sh@78 -- # nvmftestfini 00:26:07.839 08:18:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:07.839 08:18:38 -- nvmf/common.sh@116 -- # sync 00:26:07.839 08:18:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:07.839 08:18:38 -- nvmf/common.sh@119 -- # set +e 00:26:07.839 08:18:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:07.839 08:18:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:07.839 rmmod nvme_tcp 00:26:07.839 rmmod nvme_fabrics 00:26:07.839 rmmod nvme_keyring 00:26:07.839 08:18:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:07.839 08:18:38 -- nvmf/common.sh@123 -- # set -e 00:26:07.839 08:18:38 -- nvmf/common.sh@124 -- # return 0 00:26:07.839 08:18:38 -- nvmf/common.sh@477 -- # '[' -n 1178068 ']' 00:26:07.839 08:18:38 -- nvmf/common.sh@478 -- # killprocess 1178068 00:26:07.839 08:18:38 -- common/autotest_common.sh@926 -- # '[' -z 1178068 ']' 00:26:07.839 08:18:38 -- common/autotest_common.sh@930 -- # kill -0 1178068 00:26:07.839 08:18:38 -- common/autotest_common.sh@931 -- # uname 00:26:07.839 08:18:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:07.839 08:18:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1178068 00:26:07.839 08:18:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:07.839 08:18:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:07.839 08:18:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1178068' 00:26:07.839 killing process with pid 1178068 00:26:07.839 08:18:38 -- common/autotest_common.sh@945 -- # kill 1178068 00:26:07.839 08:18:38 -- common/autotest_common.sh@950 -- # wait 1178068 00:26:08.100 08:18:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:08.100 08:18:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:08.100 08:18:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:08.100 08:18:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:08.100 08:18:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:08.100 08:18:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.100 08:18:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.100 08:18:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.645 08:18:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:10.645 00:26:10.645 real 0m11.186s 00:26:10.645 user 0m3.935s 00:26:10.645 sys 0m5.687s 00:26:10.645 08:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.645 08:18:40 -- common/autotest_common.sh@10 -- # set +x 00:26:10.645 ************************************ 00:26:10.645 END TEST nvmf_async_init 00:26:10.645 
************************************ 00:26:10.645 08:18:40 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:10.645 08:18:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:10.645 08:18:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:10.645 08:18:40 -- common/autotest_common.sh@10 -- # set +x 00:26:10.645 ************************************ 00:26:10.645 START TEST dma 00:26:10.645 ************************************ 00:26:10.645 08:18:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:10.645 * Looking for test storage... 00:26:10.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:10.645 08:18:40 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.645 08:18:40 -- nvmf/common.sh@7 -- # uname -s 00:26:10.645 08:18:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.645 08:18:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.646 08:18:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.646 08:18:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.646 08:18:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.646 08:18:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.646 08:18:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.646 08:18:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.646 08:18:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.646 08:18:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.646 08:18:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.646 08:18:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.646 08:18:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.646 08:18:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.646 08:18:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.646 08:18:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.646 08:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.646 08:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.646 08:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.646 08:18:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.646 08:18:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.646 08:18:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.646 08:18:40 -- paths/export.sh@5 -- # export PATH 00:26:10.646 08:18:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.646 08:18:40 -- nvmf/common.sh@46 -- # : 0 00:26:10.646 08:18:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:10.646 08:18:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:10.646 08:18:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:10.646 08:18:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.646 08:18:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.646 08:18:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:10.646 08:18:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:10.646 08:18:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:10.646 08:18:40 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:10.646 08:18:40 -- host/dma.sh@13 -- # exit 0 00:26:10.646 00:26:10.646 real 0m0.120s 00:26:10.646 user 0m0.050s 00:26:10.646 sys 0m0.077s 00:26:10.646 08:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.646 08:18:40 -- common/autotest_common.sh@10 -- # set +x 00:26:10.646 ************************************ 00:26:10.646 END TEST dma 00:26:10.646 ************************************ 00:26:10.646 08:18:40 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:10.646 08:18:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:10.646 08:18:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:10.646 08:18:40 -- common/autotest_common.sh@10 -- # set +x 00:26:10.646 ************************************ 00:26:10.646 START TEST nvmf_identify 00:26:10.646 ************************************ 00:26:10.646 08:18:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:10.646 * Looking for 
test storage... 00:26:10.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:10.646 08:18:40 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.646 08:18:40 -- nvmf/common.sh@7 -- # uname -s 00:26:10.646 08:18:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.646 08:18:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.646 08:18:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.646 08:18:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.646 08:18:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.646 08:18:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.646 08:18:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.646 08:18:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.646 08:18:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.646 08:18:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.646 08:18:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.646 08:18:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:10.646 08:18:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.646 08:18:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.646 08:18:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.646 08:18:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.646 08:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.646 08:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.646 08:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.646 08:18:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.646 08:18:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.646 08:18:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.646 08:18:40 -- paths/export.sh@5 -- # export PATH 00:26:10.647 08:18:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.647 08:18:41 -- nvmf/common.sh@46 -- # : 0 00:26:10.647 08:18:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:10.647 08:18:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:10.647 08:18:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:10.647 08:18:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.647 08:18:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.647 08:18:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:10.647 08:18:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:10.647 08:18:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:10.647 08:18:41 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:10.647 08:18:41 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:10.647 08:18:41 -- host/identify.sh@14 -- # nvmftestinit 00:26:10.647 08:18:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:10.647 08:18:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.647 08:18:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:10.647 08:18:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:10.647 08:18:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:10.647 08:18:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.647 08:18:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.647 08:18:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.647 08:18:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:10.647 08:18:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:10.647 08:18:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:10.647 08:18:41 -- common/autotest_common.sh@10 -- # set +x 00:26:17.236 08:18:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:17.236 08:18:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:17.236 08:18:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:17.236 08:18:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:17.236 08:18:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:17.236 08:18:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:17.236 08:18:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:17.236 08:18:47 -- nvmf/common.sh@294 -- # net_devs=() 00:26:17.236 08:18:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:17.236 08:18:47 -- nvmf/common.sh@295 
-- # e810=() 00:26:17.236 08:18:47 -- nvmf/common.sh@295 -- # local -ga e810 00:26:17.236 08:18:47 -- nvmf/common.sh@296 -- # x722=() 00:26:17.236 08:18:47 -- nvmf/common.sh@296 -- # local -ga x722 00:26:17.236 08:18:47 -- nvmf/common.sh@297 -- # mlx=() 00:26:17.236 08:18:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:17.236 08:18:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.236 08:18:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:17.236 08:18:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:17.236 08:18:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:17.236 08:18:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:17.236 08:18:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:17.236 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:17.236 08:18:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:17.236 08:18:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:17.236 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:17.236 08:18:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:17.236 08:18:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:17.236 08:18:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.236 08:18:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:17.236 08:18:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.236 08:18:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:17.236 Found 
net devices under 0000:31:00.0: cvl_0_0 00:26:17.236 08:18:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.236 08:18:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:17.236 08:18:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.236 08:18:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:17.236 08:18:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.236 08:18:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:17.236 Found net devices under 0000:31:00.1: cvl_0_1 00:26:17.236 08:18:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.236 08:18:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:17.236 08:18:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:17.236 08:18:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:17.236 08:18:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.236 08:18:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.236 08:18:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.236 08:18:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:17.236 08:18:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.236 08:18:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.236 08:18:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:17.236 08:18:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.236 08:18:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.236 08:18:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:17.236 08:18:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:17.236 08:18:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.236 08:18:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.236 08:18:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.236 08:18:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.236 08:18:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:17.236 08:18:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.236 08:18:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.236 08:18:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.236 08:18:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:17.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.485 ms 00:26:17.236 00:26:17.236 --- 10.0.0.2 ping statistics --- 00:26:17.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.236 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:26:17.236 08:18:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:26:17.236 00:26:17.236 --- 10.0.0.1 ping statistics --- 00:26:17.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.236 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:26:17.236 08:18:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.236 08:18:47 -- nvmf/common.sh@410 -- # return 0 00:26:17.236 08:18:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:17.236 08:18:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.236 08:18:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:17.236 08:18:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.236 08:18:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:17.236 08:18:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:17.236 08:18:47 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:17.236 08:18:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:17.236 08:18:47 -- common/autotest_common.sh@10 -- # set +x 00:26:17.236 08:18:47 -- host/identify.sh@19 -- # nvmfpid=1182534 00:26:17.236 08:18:47 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:17.236 08:18:47 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:17.236 08:18:47 -- host/identify.sh@23 -- # waitforlisten 1182534 00:26:17.236 08:18:47 -- common/autotest_common.sh@819 -- # '[' -z 1182534 ']' 00:26:17.236 08:18:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.236 08:18:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:17.236 08:18:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.236 08:18:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:17.236 08:18:47 -- common/autotest_common.sh@10 -- # set +x 00:26:17.236 [2024-06-11 08:18:47.785352] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:17.236 [2024-06-11 08:18:47.785413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.236 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.236 [2024-06-11 08:18:47.857696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.497 [2024-06-11 08:18:47.931975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:17.497 [2024-06-11 08:18:47.932112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.497 [2024-06-11 08:18:47.932123] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.497 [2024-06-11 08:18:47.932132] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
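The nvmftestinit/nvmf_tcp_init trace above builds the two-port loopback topology the phy tests use: one E810 port (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, while its link partner (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the same setup, with interface names, addresses and flags taken from this run (paths shortened to the standard SPDK build layout):

  # Put the target-side port into its own namespace and address both ends of the link
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept NVMe/TCP (port 4420) on the initiator-side interface, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Run the target inside the namespace; the identify test uses 4 cores (-m 0xF)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF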
00:26:17.497 [2024-06-11 08:18:47.932280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.497 [2024-06-11 08:18:47.932413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.497 [2024-06-11 08:18:47.932530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.497 [2024-06-11 08:18:47.932718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.068 08:18:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:18.068 08:18:48 -- common/autotest_common.sh@852 -- # return 0 00:26:18.068 08:18:48 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:18.068 08:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.068 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.068 [2024-06-11 08:18:48.565504] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.068 08:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.068 08:18:48 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:18.068 08:18:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:18.068 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.068 08:18:48 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:18.068 08:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.068 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.068 Malloc0 00:26:18.068 08:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.068 08:18:48 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.068 08:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.068 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.069 08:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.069 08:18:48 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:18.069 08:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.069 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.069 08:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.069 08:18:48 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.069 08:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.069 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.069 [2024-06-11 08:18:48.662635] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.069 08:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.069 08:18:48 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:18.069 08:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.069 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.069 08:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.069 08:18:48 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:18.069 08:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.069 08:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.069 [2024-06-11 08:18:48.682487] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:18.069 [ 
00:26:18.069 { 00:26:18.069 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:18.069 "subtype": "Discovery", 00:26:18.069 "listen_addresses": [ 00:26:18.069 { 00:26:18.069 "transport": "TCP", 00:26:18.069 "trtype": "TCP", 00:26:18.069 "adrfam": "IPv4", 00:26:18.069 "traddr": "10.0.0.2", 00:26:18.069 "trsvcid": "4420" 00:26:18.069 } 00:26:18.069 ], 00:26:18.069 "allow_any_host": true, 00:26:18.069 "hosts": [] 00:26:18.069 }, 00:26:18.069 { 00:26:18.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.069 "subtype": "NVMe", 00:26:18.069 "listen_addresses": [ 00:26:18.069 { 00:26:18.069 "transport": "TCP", 00:26:18.069 "trtype": "TCP", 00:26:18.069 "adrfam": "IPv4", 00:26:18.069 "traddr": "10.0.0.2", 00:26:18.069 "trsvcid": "4420" 00:26:18.069 } 00:26:18.069 ], 00:26:18.069 "allow_any_host": true, 00:26:18.069 "hosts": [], 00:26:18.069 "serial_number": "SPDK00000000000001", 00:26:18.069 "model_number": "SPDK bdev Controller", 00:26:18.069 "max_namespaces": 32, 00:26:18.069 "min_cntlid": 1, 00:26:18.069 "max_cntlid": 65519, 00:26:18.069 "namespaces": [ 00:26:18.069 { 00:26:18.069 "nsid": 1, 00:26:18.069 "bdev_name": "Malloc0", 00:26:18.069 "name": "Malloc0", 00:26:18.069 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:18.069 "eui64": "ABCDEF0123456789", 00:26:18.069 "uuid": "1bc5e50b-1a70-469c-aa1d-759859fd5e6c" 00:26:18.069 } 00:26:18.069 ] 00:26:18.069 } 00:26:18.069 ] 00:26:18.069 08:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.069 08:18:48 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:18.332 [2024-06-11 08:18:48.719333] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
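Beyond the data subsystem (cnode1 backed by Malloc0), the identify test also registers a listener on the well-known discovery subsystem and then points the spdk_nvme_identify utility at it; the -L all flag enables all SPDK log flags, which is what produces the nvme_ctrlr/nvme_tcp *DEBUG* trace that follows. A minimal sketch with the address and port used in this run, issued against the target started above:

  # Expose the discovery service on the same address/port and confirm what the target reports
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems
  # Query the discovery subsystem over TCP; -L all turns on debug logging in the identify process
  ./build/bin/spdk_nvme_identify -L all \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'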
00:26:18.332 [2024-06-11 08:18:48.719400] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182750 ] 00:26:18.332 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.332 [2024-06-11 08:18:48.753122] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:18.332 [2024-06-11 08:18:48.753173] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:18.332 [2024-06-11 08:18:48.753179] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:18.332 [2024-06-11 08:18:48.753189] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:18.332 [2024-06-11 08:18:48.753197] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:18.332 [2024-06-11 08:18:48.756466] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:18.332 [2024-06-11 08:18:48.756498] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21929e0 0 00:26:18.332 [2024-06-11 08:18:48.764447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:18.332 [2024-06-11 08:18:48.764458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:18.332 [2024-06-11 08:18:48.764462] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:18.332 [2024-06-11 08:18:48.764465] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:18.332 [2024-06-11 08:18:48.764500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.764507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.764511] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.332 [2024-06-11 08:18:48.764524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:18.332 [2024-06-11 08:18:48.764540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.332 [2024-06-11 08:18:48.772450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.332 [2024-06-11 08:18:48.772460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.332 [2024-06-11 08:18:48.772464] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.772468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.332 [2024-06-11 08:18:48.772481] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:18.332 [2024-06-11 08:18:48.772488] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:18.332 [2024-06-11 08:18:48.772493] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:18.332 [2024-06-11 08:18:48.772507] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.772514] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.772518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.332 [2024-06-11 08:18:48.772525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-06-11 08:18:48.772538] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.332 [2024-06-11 08:18:48.772723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.332 [2024-06-11 08:18:48.772730] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.332 [2024-06-11 08:18:48.772733] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.772737] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.332 [2024-06-11 08:18:48.772745] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:18.332 [2024-06-11 08:18:48.772752] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:18.332 [2024-06-11 08:18:48.772759] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.772763] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.772766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.332 [2024-06-11 08:18:48.772773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-06-11 08:18:48.772783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.332 [2024-06-11 08:18:48.772992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.332 [2024-06-11 08:18:48.772999] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.332 [2024-06-11 08:18:48.773002] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773006] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.332 [2024-06-11 08:18:48.773012] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:18.332 [2024-06-11 08:18:48.773020] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:18.332 [2024-06-11 08:18:48.773026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.332 [2024-06-11 08:18:48.773040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-06-11 08:18:48.773050] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.332 [2024-06-11 08:18:48.773128] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.332 [2024-06-11 
08:18:48.773134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.332 [2024-06-11 08:18:48.773138] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773141] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.332 [2024-06-11 08:18:48.773147] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:18.332 [2024-06-11 08:18:48.773156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773163] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.332 [2024-06-11 08:18:48.773172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-06-11 08:18:48.773182] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.332 [2024-06-11 08:18:48.773397] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.332 [2024-06-11 08:18:48.773403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.332 [2024-06-11 08:18:48.773407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.332 [2024-06-11 08:18:48.773415] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:18.332 [2024-06-11 08:18:48.773420] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:18.332 [2024-06-11 08:18:48.773427] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:18.332 [2024-06-11 08:18:48.773533] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:18.332 [2024-06-11 08:18:48.773539] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:18.332 [2024-06-11 08:18:48.773547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773551] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773554] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.332 [2024-06-11 08:18:48.773561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-06-11 08:18:48.773571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.332 [2024-06-11 08:18:48.773758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.332 [2024-06-11 08:18:48.773764] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.332 [2024-06-11 08:18:48.773768] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773771] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.332 [2024-06-11 08:18:48.773777] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:18.332 [2024-06-11 08:18:48.773785] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.332 [2024-06-11 08:18:48.773789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.773793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.773799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-06-11 08:18:48.773809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.333 [2024-06-11 08:18:48.773995] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.333 [2024-06-11 08:18:48.774001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.333 [2024-06-11 08:18:48.774005] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774008] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.333 [2024-06-11 08:18:48.774013] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:18.333 [2024-06-11 08:18:48.774018] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:18.333 [2024-06-11 08:18:48.774028] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:18.333 [2024-06-11 08:18:48.774036] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:18.333 [2024-06-11 08:18:48.774045] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774052] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.774058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-06-11 08:18:48.774068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.333 [2024-06-11 08:18:48.774298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.333 [2024-06-11 08:18:48.774305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.333 [2024-06-11 08:18:48.774308] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774312] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21929e0): datao=0, datal=4096, cccid=0 00:26:18.333 [2024-06-11 08:18:48.774317] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21fa730) on tqpair(0x21929e0): 
expected_datao=0, payload_size=4096 00:26:18.333 [2024-06-11 08:18:48.774325] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774329] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.333 [2024-06-11 08:18:48.774473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.333 [2024-06-11 08:18:48.774476] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774480] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.333 [2024-06-11 08:18:48.774488] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:18.333 [2024-06-11 08:18:48.774495] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:18.333 [2024-06-11 08:18:48.774500] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:18.333 [2024-06-11 08:18:48.774505] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:18.333 [2024-06-11 08:18:48.774509] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:18.333 [2024-06-11 08:18:48.774514] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:18.333 [2024-06-11 08:18:48.774521] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:18.333 [2024-06-11 08:18:48.774528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774532] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774535] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.774542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:18.333 [2024-06-11 08:18:48.774552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.333 [2024-06-11 08:18:48.774792] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.333 [2024-06-11 08:18:48.774798] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.333 [2024-06-11 08:18:48.774801] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774806] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fa730) on tqpair=0x21929e0 00:26:18.333 [2024-06-11 08:18:48.774814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774818] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774821] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.774827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:18.333 [2024-06-11 08:18:48.774833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774837] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774840] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.774846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.333 [2024-06-11 08:18:48.774852] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774858] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.774864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.333 [2024-06-11 08:18:48.774870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774873] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774877] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.774882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.333 [2024-06-11 08:18:48.774887] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:18.333 [2024-06-11 08:18:48.774896] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:18.333 [2024-06-11 08:18:48.774902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774906] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.774909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.774916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-06-11 08:18:48.774927] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa730, cid 0, qid 0 00:26:18.333 [2024-06-11 08:18:48.774932] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa890, cid 1, qid 0 00:26:18.333 [2024-06-11 08:18:48.774936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fa9f0, cid 2, qid 0 00:26:18.333 [2024-06-11 08:18:48.774941] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fab50, cid 3, qid 0 00:26:18.333 [2024-06-11 08:18:48.774945] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21facb0, cid 4, qid 0 00:26:18.333 [2024-06-11 08:18:48.775147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.333 [2024-06-11 08:18:48.775153] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.333 [2024-06-11 08:18:48.775156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.775160] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21facb0) on tqpair=0x21929e0 00:26:18.333 [2024-06-11 08:18:48.775165] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:18.333 [2024-06-11 08:18:48.775172] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:18.333 [2024-06-11 08:18:48.775182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.775186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.775189] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.775195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-06-11 08:18:48.775205] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21facb0, cid 4, qid 0 00:26:18.333 [2024-06-11 08:18:48.775378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.333 [2024-06-11 08:18:48.775385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.333 [2024-06-11 08:18:48.775388] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.775392] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21929e0): datao=0, datal=4096, cccid=4 00:26:18.333 [2024-06-11 08:18:48.775396] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21facb0) on tqpair(0x21929e0): expected_datao=0, payload_size=4096 00:26:18.333 [2024-06-11 08:18:48.775410] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.775414] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.820448] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.333 [2024-06-11 08:18:48.820460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.333 [2024-06-11 08:18:48.820463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.820467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21facb0) on tqpair=0x21929e0 00:26:18.333 [2024-06-11 08:18:48.820480] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:18.333 [2024-06-11 08:18:48.820502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.820507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.820510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21929e0) 00:26:18.333 [2024-06-11 08:18:48.820517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-06-11 08:18:48.820524] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.333 [2024-06-11 08:18:48.820528] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.820532] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21929e0) 00:26:18.334 [2024-06-11 
08:18:48.820538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.334 [2024-06-11 08:18:48.820554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21facb0, cid 4, qid 0 00:26:18.334 [2024-06-11 08:18:48.820559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fae10, cid 5, qid 0 00:26:18.334 [2024-06-11 08:18:48.820762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.334 [2024-06-11 08:18:48.820769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.334 [2024-06-11 08:18:48.820772] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.820776] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21929e0): datao=0, datal=1024, cccid=4 00:26:18.334 [2024-06-11 08:18:48.820780] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21facb0) on tqpair(0x21929e0): expected_datao=0, payload_size=1024 00:26:18.334 [2024-06-11 08:18:48.820788] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.820791] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.820802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.334 [2024-06-11 08:18:48.820808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.334 [2024-06-11 08:18:48.820811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.820815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fae10) on tqpair=0x21929e0 00:26:18.334 [2024-06-11 08:18:48.861580] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.334 [2024-06-11 08:18:48.861590] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.334 [2024-06-11 08:18:48.861594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.861598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21facb0) on tqpair=0x21929e0 00:26:18.334 [2024-06-11 08:18:48.861610] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.861614] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.861618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21929e0) 00:26:18.334 [2024-06-11 08:18:48.861625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.334 [2024-06-11 08:18:48.861639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21facb0, cid 4, qid 0 00:26:18.334 [2024-06-11 08:18:48.861904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.334 [2024-06-11 08:18:48.861911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.334 [2024-06-11 08:18:48.861915] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.861919] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21929e0): datao=0, datal=3072, cccid=4 00:26:18.334 [2024-06-11 08:18:48.861923] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21facb0) on tqpair(0x21929e0): expected_datao=0, payload_size=3072 
00:26:18.334 [2024-06-11 08:18:48.861930] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.861934] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.862108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.334 [2024-06-11 08:18:48.862114] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.334 [2024-06-11 08:18:48.862118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.862122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21facb0) on tqpair=0x21929e0 00:26:18.334 [2024-06-11 08:18:48.862130] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.862134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.862138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21929e0) 00:26:18.334 [2024-06-11 08:18:48.862144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.334 [2024-06-11 08:18:48.862157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21facb0, cid 4, qid 0 00:26:18.334 [2024-06-11 08:18:48.862391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.334 [2024-06-11 08:18:48.862398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.334 [2024-06-11 08:18:48.862402] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.862405] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21929e0): datao=0, datal=8, cccid=4 00:26:18.334 [2024-06-11 08:18:48.862409] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21facb0) on tqpair(0x21929e0): expected_datao=0, payload_size=8 00:26:18.334 [2024-06-11 08:18:48.862416] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.862420] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.903613] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.334 [2024-06-11 08:18:48.903627] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.334 [2024-06-11 08:18:48.903630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.334 [2024-06-11 08:18:48.903634] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21facb0) on tqpair=0x21929e0 00:26:18.334 ===================================================== 00:26:18.334 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:18.334 ===================================================== 00:26:18.334 Controller Capabilities/Features 00:26:18.334 ================================ 00:26:18.334 Vendor ID: 0000 00:26:18.334 Subsystem Vendor ID: 0000 00:26:18.334 Serial Number: .................... 00:26:18.334 Model Number: ........................................ 
00:26:18.334 Firmware Version: 24.01.1 00:26:18.334 Recommended Arb Burst: 0 00:26:18.334 IEEE OUI Identifier: 00 00 00 00:26:18.334 Multi-path I/O 00:26:18.334 May have multiple subsystem ports: No 00:26:18.334 May have multiple controllers: No 00:26:18.334 Associated with SR-IOV VF: No 00:26:18.334 Max Data Transfer Size: 131072 00:26:18.334 Max Number of Namespaces: 0 00:26:18.334 Max Number of I/O Queues: 1024 00:26:18.334 NVMe Specification Version (VS): 1.3 00:26:18.334 NVMe Specification Version (Identify): 1.3 00:26:18.334 Maximum Queue Entries: 128 00:26:18.334 Contiguous Queues Required: Yes 00:26:18.334 Arbitration Mechanisms Supported 00:26:18.334 Weighted Round Robin: Not Supported 00:26:18.334 Vendor Specific: Not Supported 00:26:18.334 Reset Timeout: 15000 ms 00:26:18.334 Doorbell Stride: 4 bytes 00:26:18.334 NVM Subsystem Reset: Not Supported 00:26:18.334 Command Sets Supported 00:26:18.334 NVM Command Set: Supported 00:26:18.334 Boot Partition: Not Supported 00:26:18.334 Memory Page Size Minimum: 4096 bytes 00:26:18.334 Memory Page Size Maximum: 4096 bytes 00:26:18.334 Persistent Memory Region: Not Supported 00:26:18.334 Optional Asynchronous Events Supported 00:26:18.334 Namespace Attribute Notices: Not Supported 00:26:18.334 Firmware Activation Notices: Not Supported 00:26:18.334 ANA Change Notices: Not Supported 00:26:18.334 PLE Aggregate Log Change Notices: Not Supported 00:26:18.334 LBA Status Info Alert Notices: Not Supported 00:26:18.334 EGE Aggregate Log Change Notices: Not Supported 00:26:18.334 Normal NVM Subsystem Shutdown event: Not Supported 00:26:18.334 Zone Descriptor Change Notices: Not Supported 00:26:18.334 Discovery Log Change Notices: Supported 00:26:18.334 Controller Attributes 00:26:18.334 128-bit Host Identifier: Not Supported 00:26:18.334 Non-Operational Permissive Mode: Not Supported 00:26:18.334 NVM Sets: Not Supported 00:26:18.334 Read Recovery Levels: Not Supported 00:26:18.334 Endurance Groups: Not Supported 00:26:18.334 Predictable Latency Mode: Not Supported 00:26:18.334 Traffic Based Keep ALive: Not Supported 00:26:18.334 Namespace Granularity: Not Supported 00:26:18.334 SQ Associations: Not Supported 00:26:18.334 UUID List: Not Supported 00:26:18.334 Multi-Domain Subsystem: Not Supported 00:26:18.334 Fixed Capacity Management: Not Supported 00:26:18.334 Variable Capacity Management: Not Supported 00:26:18.334 Delete Endurance Group: Not Supported 00:26:18.334 Delete NVM Set: Not Supported 00:26:18.334 Extended LBA Formats Supported: Not Supported 00:26:18.334 Flexible Data Placement Supported: Not Supported 00:26:18.334 00:26:18.334 Controller Memory Buffer Support 00:26:18.334 ================================ 00:26:18.334 Supported: No 00:26:18.334 00:26:18.334 Persistent Memory Region Support 00:26:18.334 ================================ 00:26:18.334 Supported: No 00:26:18.334 00:26:18.334 Admin Command Set Attributes 00:26:18.334 ============================ 00:26:18.334 Security Send/Receive: Not Supported 00:26:18.334 Format NVM: Not Supported 00:26:18.334 Firmware Activate/Download: Not Supported 00:26:18.334 Namespace Management: Not Supported 00:26:18.334 Device Self-Test: Not Supported 00:26:18.334 Directives: Not Supported 00:26:18.334 NVMe-MI: Not Supported 00:26:18.334 Virtualization Management: Not Supported 00:26:18.334 Doorbell Buffer Config: Not Supported 00:26:18.334 Get LBA Status Capability: Not Supported 00:26:18.334 Command & Feature Lockdown Capability: Not Supported 00:26:18.334 Abort Command Limit: 1 00:26:18.334 
Async Event Request Limit: 4 00:26:18.334 Number of Firmware Slots: N/A 00:26:18.334 Firmware Slot 1 Read-Only: N/A 00:26:18.334 Firmware Activation Without Reset: N/A 00:26:18.334 Multiple Update Detection Support: N/A 00:26:18.334 Firmware Update Granularity: No Information Provided 00:26:18.334 Per-Namespace SMART Log: No 00:26:18.334 Asymmetric Namespace Access Log Page: Not Supported 00:26:18.334 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:18.334 Command Effects Log Page: Not Supported 00:26:18.334 Get Log Page Extended Data: Supported 00:26:18.334 Telemetry Log Pages: Not Supported 00:26:18.335 Persistent Event Log Pages: Not Supported 00:26:18.335 Supported Log Pages Log Page: May Support 00:26:18.335 Commands Supported & Effects Log Page: Not Supported 00:26:18.335 Feature Identifiers & Effects Log Page:May Support 00:26:18.335 NVMe-MI Commands & Effects Log Page: May Support 00:26:18.335 Data Area 4 for Telemetry Log: Not Supported 00:26:18.335 Error Log Page Entries Supported: 128 00:26:18.335 Keep Alive: Not Supported 00:26:18.335 00:26:18.335 NVM Command Set Attributes 00:26:18.335 ========================== 00:26:18.335 Submission Queue Entry Size 00:26:18.335 Max: 1 00:26:18.335 Min: 1 00:26:18.335 Completion Queue Entry Size 00:26:18.335 Max: 1 00:26:18.335 Min: 1 00:26:18.335 Number of Namespaces: 0 00:26:18.335 Compare Command: Not Supported 00:26:18.335 Write Uncorrectable Command: Not Supported 00:26:18.335 Dataset Management Command: Not Supported 00:26:18.335 Write Zeroes Command: Not Supported 00:26:18.335 Set Features Save Field: Not Supported 00:26:18.335 Reservations: Not Supported 00:26:18.335 Timestamp: Not Supported 00:26:18.335 Copy: Not Supported 00:26:18.335 Volatile Write Cache: Not Present 00:26:18.335 Atomic Write Unit (Normal): 1 00:26:18.335 Atomic Write Unit (PFail): 1 00:26:18.335 Atomic Compare & Write Unit: 1 00:26:18.335 Fused Compare & Write: Supported 00:26:18.335 Scatter-Gather List 00:26:18.335 SGL Command Set: Supported 00:26:18.335 SGL Keyed: Supported 00:26:18.335 SGL Bit Bucket Descriptor: Not Supported 00:26:18.335 SGL Metadata Pointer: Not Supported 00:26:18.335 Oversized SGL: Not Supported 00:26:18.335 SGL Metadata Address: Not Supported 00:26:18.335 SGL Offset: Supported 00:26:18.335 Transport SGL Data Block: Not Supported 00:26:18.335 Replay Protected Memory Block: Not Supported 00:26:18.335 00:26:18.335 Firmware Slot Information 00:26:18.335 ========================= 00:26:18.335 Active slot: 0 00:26:18.335 00:26:18.335 00:26:18.335 Error Log 00:26:18.335 ========= 00:26:18.335 00:26:18.335 Active Namespaces 00:26:18.335 ================= 00:26:18.335 Discovery Log Page 00:26:18.335 ================== 00:26:18.335 Generation Counter: 2 00:26:18.335 Number of Records: 2 00:26:18.335 Record Format: 0 00:26:18.335 00:26:18.335 Discovery Log Entry 0 00:26:18.335 ---------------------- 00:26:18.335 Transport Type: 3 (TCP) 00:26:18.335 Address Family: 1 (IPv4) 00:26:18.335 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:18.335 Entry Flags: 00:26:18.335 Duplicate Returned Information: 1 00:26:18.335 Explicit Persistent Connection Support for Discovery: 1 00:26:18.335 Transport Requirements: 00:26:18.335 Secure Channel: Not Required 00:26:18.335 Port ID: 0 (0x0000) 00:26:18.335 Controller ID: 65535 (0xffff) 00:26:18.335 Admin Max SQ Size: 128 00:26:18.335 Transport Service Identifier: 4420 00:26:18.335 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:18.335 Transport Address: 10.0.0.2 00:26:18.335 
Discovery Log Entry 1 00:26:18.335 ---------------------- 00:26:18.335 Transport Type: 3 (TCP) 00:26:18.335 Address Family: 1 (IPv4) 00:26:18.335 Subsystem Type: 2 (NVM Subsystem) 00:26:18.335 Entry Flags: 00:26:18.335 Duplicate Returned Information: 0 00:26:18.335 Explicit Persistent Connection Support for Discovery: 0 00:26:18.335 Transport Requirements: 00:26:18.335 Secure Channel: Not Required 00:26:18.335 Port ID: 0 (0x0000) 00:26:18.335 Controller ID: 65535 (0xffff) 00:26:18.335 Admin Max SQ Size: 128 00:26:18.335 Transport Service Identifier: 4420 00:26:18.335 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:18.335 Transport Address: 10.0.0.2 [2024-06-11 08:18:48.903719] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:18.335 [2024-06-11 08:18:48.903732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.335 [2024-06-11 08:18:48.903738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.335 [2024-06-11 08:18:48.903744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.335 [2024-06-11 08:18:48.903750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.335 [2024-06-11 08:18:48.903760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.903764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.903767] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21929e0) 00:26:18.335 [2024-06-11 08:18:48.903774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.335 [2024-06-11 08:18:48.903788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fab50, cid 3, qid 0 00:26:18.335 [2024-06-11 08:18:48.903952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.335 [2024-06-11 08:18:48.903958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.335 [2024-06-11 08:18:48.903962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.903965] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fab50) on tqpair=0x21929e0 00:26:18.335 [2024-06-11 08:18:48.903973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.903976] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.903980] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21929e0) 00:26:18.335 [2024-06-11 08:18:48.903986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.335 [2024-06-11 08:18:48.903999] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fab50, cid 3, qid 0 00:26:18.335 [2024-06-11 08:18:48.904178] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.335 [2024-06-11 08:18:48.904184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.335 [2024-06-11 08:18:48.904187] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.904191] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fab50) on tqpair=0x21929e0 00:26:18.335 [2024-06-11 08:18:48.904196] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:18.335 [2024-06-11 08:18:48.904201] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:18.335 [2024-06-11 08:18:48.904210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.904213] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.904217] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21929e0) 00:26:18.335 [2024-06-11 08:18:48.904224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.335 [2024-06-11 08:18:48.904233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fab50, cid 3, qid 0 00:26:18.335 [2024-06-11 08:18:48.904391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.335 [2024-06-11 08:18:48.904399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.335 [2024-06-11 08:18:48.904402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.904406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fab50) on tqpair=0x21929e0 00:26:18.335 [2024-06-11 08:18:48.904416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.904420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.904423] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21929e0) 00:26:18.335 [2024-06-11 08:18:48.904430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.335 [2024-06-11 08:18:48.908443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21fab50, cid 3, qid 0 00:26:18.335 [2024-06-11 08:18:48.908455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.335 [2024-06-11 08:18:48.908461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.335 [2024-06-11 08:18:48.908464] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.335 [2024-06-11 08:18:48.908468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21fab50) on tqpair=0x21929e0 00:26:18.335 [2024-06-11 08:18:48.908476] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:26:18.335 00:26:18.335 08:18:48 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:18.335 [2024-06-11 08:18:48.945093] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:18.335 [2024-06-11 08:18:48.945158] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182866 ] 00:26:18.335 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.600 [2024-06-11 08:18:48.977020] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:18.600 [2024-06-11 08:18:48.977063] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:18.600 [2024-06-11 08:18:48.977067] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:18.600 [2024-06-11 08:18:48.977078] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:18.600 [2024-06-11 08:18:48.977085] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:18.600 [2024-06-11 08:18:48.980470] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:18.600 [2024-06-11 08:18:48.980497] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18bd9e0 0 00:26:18.600 [2024-06-11 08:18:48.988445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:18.600 [2024-06-11 08:18:48.988454] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:18.600 [2024-06-11 08:18:48.988458] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:18.600 [2024-06-11 08:18:48.988462] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:18.600 [2024-06-11 08:18:48.988494] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.600 [2024-06-11 08:18:48.988500] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.600 [2024-06-11 08:18:48.988504] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.600 [2024-06-11 08:18:48.988515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:18.600 [2024-06-11 08:18:48.988534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.600 [2024-06-11 08:18:48.996451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.600 [2024-06-11 08:18:48.996461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.600 [2024-06-11 08:18:48.996465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.600 [2024-06-11 08:18:48.996469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.600 [2024-06-11 08:18:48.996481] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:18.600 [2024-06-11 08:18:48.996487] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:18.600 [2024-06-11 08:18:48.996492] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:18.600 [2024-06-11 08:18:48.996505] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.600 [2024-06-11 08:18:48.996510] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.600 [2024-06-11 
08:18:48.996513] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.600 [2024-06-11 08:18:48.996521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.600 [2024-06-11 08:18:48.996534] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.600 [2024-06-11 08:18:48.996722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.600 [2024-06-11 08:18:48.996728] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.600 [2024-06-11 08:18:48.996732] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.996736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.996743] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:18.601 [2024-06-11 08:18:48.996751] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:18.601 [2024-06-11 08:18:48.996757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.996761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.996764] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.996771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.601 [2024-06-11 08:18:48.996781] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.601 [2024-06-11 08:18:48.996976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.601 [2024-06-11 08:18:48.996983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.601 [2024-06-11 08:18:48.996986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.996990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.996995] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:18.601 [2024-06-11 08:18:48.997004] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:18.601 [2024-06-11 08:18:48.997010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997014] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.997024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.601 [2024-06-11 08:18:48.997036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.601 [2024-06-11 08:18:48.997241] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.601 [2024-06-11 08:18:48.997248] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:18.601 [2024-06-11 08:18:48.997251] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997255] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.997260] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:18.601 [2024-06-11 08:18:48.997269] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997273] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997277] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.997283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.601 [2024-06-11 08:18:48.997293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.601 [2024-06-11 08:18:48.997505] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.601 [2024-06-11 08:18:48.997513] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.601 [2024-06-11 08:18:48.997516] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.997525] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:18.601 [2024-06-11 08:18:48.997530] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:18.601 [2024-06-11 08:18:48.997537] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:18.601 [2024-06-11 08:18:48.997642] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:18.601 [2024-06-11 08:18:48.997646] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:18.601 [2024-06-11 08:18:48.997653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997657] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.997667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.601 [2024-06-11 08:18:48.997678] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.601 [2024-06-11 08:18:48.997859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.601 [2024-06-11 08:18:48.997865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.601 [2024-06-11 08:18:48.997869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997872] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on 
tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.997878] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:18.601 [2024-06-11 08:18:48.997887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.997894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.997901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.601 [2024-06-11 08:18:48.997912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.601 [2024-06-11 08:18:48.998126] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.601 [2024-06-11 08:18:48.998133] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.601 [2024-06-11 08:18:48.998136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998140] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.998145] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:18.601 [2024-06-11 08:18:48.998149] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:18.601 [2024-06-11 08:18:48.998156] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:18.601 [2024-06-11 08:18:48.998164] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:18.601 [2024-06-11 08:18:48.998172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998176] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998180] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.998186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.601 [2024-06-11 08:18:48.998196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.601 [2024-06-11 08:18:48.998424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.601 [2024-06-11 08:18:48.998431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.601 [2024-06-11 08:18:48.998434] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998443] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=4096, cccid=0 00:26:18.601 [2024-06-11 08:18:48.998448] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925730) on tqpair(0x18bd9e0): expected_datao=0, payload_size=4096 00:26:18.601 [2024-06-11 08:18:48.998456] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998460] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998609] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.601 [2024-06-11 08:18:48.998615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.601 [2024-06-11 08:18:48.998619] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998622] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.998630] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:18.601 [2024-06-11 08:18:48.998637] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:18.601 [2024-06-11 08:18:48.998641] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:18.601 [2024-06-11 08:18:48.998645] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:18.601 [2024-06-11 08:18:48.998650] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:18.601 [2024-06-11 08:18:48.998654] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:18.601 [2024-06-11 08:18:48.998663] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:18.601 [2024-06-11 08:18:48.998671] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.998686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:18.601 [2024-06-11 08:18:48.998696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.601 [2024-06-11 08:18:48.998891] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.601 [2024-06-11 08:18:48.998897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.601 [2024-06-11 08:18:48.998900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925730) on tqpair=0x18bd9e0 00:26:18.601 [2024-06-11 08:18:48.998911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998915] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18bd9e0) 00:26:18.601 [2024-06-11 08:18:48.998924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.601 [2024-06-11 08:18:48.998930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998934] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.601 [2024-06-11 08:18:48.998937] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:48.998943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.602 [2024-06-11 08:18:48.998949] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.998952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.998956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:48.998961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.602 [2024-06-11 08:18:48.998967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.998971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.998974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:48.998980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.602 [2024-06-11 08:18:48.998984] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:48.998994] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:48.999000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999004] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:48.999014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.602 [2024-06-11 08:18:48.999025] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925730, cid 0, qid 0 00:26:18.602 [2024-06-11 08:18:48.999030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925890, cid 1, qid 0 00:26:18.602 [2024-06-11 08:18:48.999035] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19259f0, cid 2, qid 0 00:26:18.602 [2024-06-11 08:18:48.999041] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.602 [2024-06-11 08:18:48.999046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925cb0, cid 4, qid 0 00:26:18.602 [2024-06-11 08:18:48.999230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.602 [2024-06-11 08:18:48.999237] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.602 [2024-06-11 08:18:48.999240] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999244] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925cb0) on tqpair=0x18bd9e0 00:26:18.602 [2024-06-11 08:18:48.999249] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:18.602 
[2024-06-11 08:18:48.999254] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:48.999261] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:48.999267] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:48.999273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999277] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:48.999287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:18.602 [2024-06-11 08:18:48.999296] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925cb0, cid 4, qid 0 00:26:18.602 [2024-06-11 08:18:48.999493] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.602 [2024-06-11 08:18:48.999500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.602 [2024-06-11 08:18:48.999503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925cb0) on tqpair=0x18bd9e0 00:26:18.602 [2024-06-11 08:18:48.999560] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:48.999569] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:48.999576] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999583] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:48.999590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.602 [2024-06-11 08:18:48.999599] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925cb0, cid 4, qid 0 00:26:18.602 [2024-06-11 08:18:48.999808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.602 [2024-06-11 08:18:48.999815] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.602 [2024-06-11 08:18:48.999818] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999822] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=4096, cccid=4 00:26:18.602 [2024-06-11 08:18:48.999826] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925cb0) on tqpair(0x18bd9e0): expected_datao=0, payload_size=4096 00:26:18.602 [2024-06-11 08:18:48.999833] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:48.999837] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.602 [2024-06-11 08:18:49.044457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.602 [2024-06-11 08:18:49.044460] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044464] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925cb0) on tqpair=0x18bd9e0 00:26:18.602 [2024-06-11 08:18:49.044475] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:18.602 [2024-06-11 08:18:49.044490] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.044499] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.044506] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044510] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044513] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:49.044520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.602 [2024-06-11 08:18:49.044532] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925cb0, cid 4, qid 0 00:26:18.602 [2024-06-11 08:18:49.044718] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.602 [2024-06-11 08:18:49.044725] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.602 [2024-06-11 08:18:49.044728] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044732] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=4096, cccid=4 00:26:18.602 [2024-06-11 08:18:49.044736] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925cb0) on tqpair(0x18bd9e0): expected_datao=0, payload_size=4096 00:26:18.602 [2024-06-11 08:18:49.044743] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044747] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.602 [2024-06-11 08:18:49.044930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.602 [2024-06-11 08:18:49.044934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044937] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925cb0) on tqpair=0x18bd9e0 00:26:18.602 [2024-06-11 08:18:49.044950] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.044959] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.044966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.602 [2024-06-11 
08:18:49.044970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.044973] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18bd9e0) 00:26:18.602 [2024-06-11 08:18:49.044979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.602 [2024-06-11 08:18:49.044990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925cb0, cid 4, qid 0 00:26:18.602 [2024-06-11 08:18:49.045213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.602 [2024-06-11 08:18:49.045222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.602 [2024-06-11 08:18:49.045226] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.045229] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=4096, cccid=4 00:26:18.602 [2024-06-11 08:18:49.045234] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925cb0) on tqpair(0x18bd9e0): expected_datao=0, payload_size=4096 00:26:18.602 [2024-06-11 08:18:49.045251] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.045257] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.086606] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.602 [2024-06-11 08:18:49.086619] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.602 [2024-06-11 08:18:49.086623] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.602 [2024-06-11 08:18:49.086627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925cb0) on tqpair=0x18bd9e0 00:26:18.602 [2024-06-11 08:18:49.086636] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.086643] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.086677] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.086683] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.086688] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:18.602 [2024-06-11 08:18:49.086693] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:18.603 [2024-06-11 08:18:49.086698] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:18.603 [2024-06-11 08:18:49.086703] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:18.603 [2024-06-11 08:18:49.086717] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.086721] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.086725] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.086732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.086739] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.086742] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.086745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.086752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.603 [2024-06-11 08:18:49.086766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925cb0, cid 4, qid 0 00:26:18.603 [2024-06-11 08:18:49.086771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925e10, cid 5, qid 0 00:26:18.603 [2024-06-11 08:18:49.086953] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.086960] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.086963] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.086967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925cb0) on tqpair=0x18bd9e0 00:26:18.603 [2024-06-11 08:18:49.086974] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.086980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.086983] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.086987] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925e10) on tqpair=0x18bd9e0 00:26:18.603 [2024-06-11 08:18:49.086996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.087003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.087006] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.087013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.087022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925e10, cid 5, qid 0 00:26:18.603 [2024-06-11 08:18:49.087200] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.087206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.087209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.087213] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925e10) on tqpair=0x18bd9e0 00:26:18.603 [2024-06-11 08:18:49.087222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.087226] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.087229] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.087236] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.087245] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925e10, cid 5, qid 0 00:26:18.603 [2024-06-11 08:18:49.087427] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.087433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.091441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091448] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925e10) on tqpair=0x18bd9e0 00:26:18.603 [2024-06-11 08:18:49.091459] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.091472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.091483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925e10, cid 5, qid 0 00:26:18.603 [2024-06-11 08:18:49.091666] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.091673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.091676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091680] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925e10) on tqpair=0x18bd9e0 00:26:18.603 [2024-06-11 08:18:49.091692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.091706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.091713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091717] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091720] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.091726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.091733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.091749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.091756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.091763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18bd9e0) 00:26:18.603 [2024-06-11 08:18:49.091769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.603 [2024-06-11 08:18:49.091780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925e10, cid 5, qid 0 00:26:18.603 [2024-06-11 08:18:49.091785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925cb0, cid 4, qid 0 00:26:18.603 [2024-06-11 08:18:49.091789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925f70, cid 6, qid 0 00:26:18.603 [2024-06-11 08:18:49.091794] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19260d0, cid 7, qid 0 00:26:18.603 [2024-06-11 08:18:49.092010] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.603 [2024-06-11 08:18:49.092017] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.603 [2024-06-11 08:18:49.092020] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092024] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=8192, cccid=5 00:26:18.603 [2024-06-11 08:18:49.092028] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925e10) on tqpair(0x18bd9e0): expected_datao=0, payload_size=8192 00:26:18.603 [2024-06-11 08:18:49.092100] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092105] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.603 [2024-06-11 08:18:49.092116] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.603 [2024-06-11 08:18:49.092119] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092123] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=512, cccid=4 00:26:18.603 [2024-06-11 08:18:49.092127] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925cb0) on tqpair(0x18bd9e0): expected_datao=0, payload_size=512 00:26:18.603 [2024-06-11 08:18:49.092134] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092138] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092143] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.603 [2024-06-11 08:18:49.092149] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.603 [2024-06-11 08:18:49.092152] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092156] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=512, cccid=6 00:26:18.603 [2024-06-11 08:18:49.092160] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1925f70) on tqpair(0x18bd9e0): expected_datao=0, payload_size=512 00:26:18.603 [2024-06-11 08:18:49.092167] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092170] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092176] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:18.603 [2024-06-11 08:18:49.092181] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:18.603 [2024-06-11 08:18:49.092185] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092188] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18bd9e0): datao=0, datal=4096, cccid=7 00:26:18.603 [2024-06-11 08:18:49.092195] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19260d0) on tqpair(0x18bd9e0): expected_datao=0, payload_size=4096 00:26:18.603 [2024-06-11 08:18:49.092202] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092205] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092216] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.092222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.092225] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925e10) on tqpair=0x18bd9e0 00:26:18.603 [2024-06-11 08:18:49.092244] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.092250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.092253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.603 [2024-06-11 08:18:49.092257] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925cb0) on tqpair=0x18bd9e0 00:26:18.603 [2024-06-11 08:18:49.092266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.603 [2024-06-11 08:18:49.092272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.603 [2024-06-11 08:18:49.092276] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.604 [2024-06-11 08:18:49.092279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925f70) on tqpair=0x18bd9e0 00:26:18.604 [2024-06-11 08:18:49.092287] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.604 [2024-06-11 08:18:49.092293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.604 [2024-06-11 08:18:49.092296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.604 [2024-06-11 08:18:49.092300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19260d0) on tqpair=0x18bd9e0 00:26:18.604 ===================================================== 00:26:18.604 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.604 ===================================================== 00:26:18.604 Controller Capabilities/Features 00:26:18.604 ================================ 00:26:18.604 Vendor ID: 8086 00:26:18.604 Subsystem Vendor ID: 8086 00:26:18.604 Serial Number: SPDK00000000000001 00:26:18.604 Model Number: SPDK bdev Controller 00:26:18.604 Firmware Version: 24.01.1 00:26:18.604 Recommended Arb Burst: 6 00:26:18.604 IEEE OUI Identifier: e4 d2 5c 00:26:18.604 Multi-path I/O 00:26:18.604 May have multiple subsystem 
ports: Yes 00:26:18.604 May have multiple controllers: Yes 00:26:18.604 Associated with SR-IOV VF: No 00:26:18.604 Max Data Transfer Size: 131072 00:26:18.604 Max Number of Namespaces: 32 00:26:18.604 Max Number of I/O Queues: 127 00:26:18.604 NVMe Specification Version (VS): 1.3 00:26:18.604 NVMe Specification Version (Identify): 1.3 00:26:18.604 Maximum Queue Entries: 128 00:26:18.604 Contiguous Queues Required: Yes 00:26:18.604 Arbitration Mechanisms Supported 00:26:18.604 Weighted Round Robin: Not Supported 00:26:18.604 Vendor Specific: Not Supported 00:26:18.604 Reset Timeout: 15000 ms 00:26:18.604 Doorbell Stride: 4 bytes 00:26:18.604 NVM Subsystem Reset: Not Supported 00:26:18.604 Command Sets Supported 00:26:18.604 NVM Command Set: Supported 00:26:18.604 Boot Partition: Not Supported 00:26:18.604 Memory Page Size Minimum: 4096 bytes 00:26:18.604 Memory Page Size Maximum: 4096 bytes 00:26:18.604 Persistent Memory Region: Not Supported 00:26:18.604 Optional Asynchronous Events Supported 00:26:18.604 Namespace Attribute Notices: Supported 00:26:18.604 Firmware Activation Notices: Not Supported 00:26:18.604 ANA Change Notices: Not Supported 00:26:18.604 PLE Aggregate Log Change Notices: Not Supported 00:26:18.604 LBA Status Info Alert Notices: Not Supported 00:26:18.604 EGE Aggregate Log Change Notices: Not Supported 00:26:18.604 Normal NVM Subsystem Shutdown event: Not Supported 00:26:18.604 Zone Descriptor Change Notices: Not Supported 00:26:18.604 Discovery Log Change Notices: Not Supported 00:26:18.604 Controller Attributes 00:26:18.604 128-bit Host Identifier: Supported 00:26:18.604 Non-Operational Permissive Mode: Not Supported 00:26:18.604 NVM Sets: Not Supported 00:26:18.604 Read Recovery Levels: Not Supported 00:26:18.604 Endurance Groups: Not Supported 00:26:18.604 Predictable Latency Mode: Not Supported 00:26:18.604 Traffic Based Keep ALive: Not Supported 00:26:18.604 Namespace Granularity: Not Supported 00:26:18.604 SQ Associations: Not Supported 00:26:18.604 UUID List: Not Supported 00:26:18.604 Multi-Domain Subsystem: Not Supported 00:26:18.604 Fixed Capacity Management: Not Supported 00:26:18.604 Variable Capacity Management: Not Supported 00:26:18.604 Delete Endurance Group: Not Supported 00:26:18.604 Delete NVM Set: Not Supported 00:26:18.604 Extended LBA Formats Supported: Not Supported 00:26:18.604 Flexible Data Placement Supported: Not Supported 00:26:18.604 00:26:18.604 Controller Memory Buffer Support 00:26:18.604 ================================ 00:26:18.604 Supported: No 00:26:18.604 00:26:18.604 Persistent Memory Region Support 00:26:18.604 ================================ 00:26:18.604 Supported: No 00:26:18.604 00:26:18.604 Admin Command Set Attributes 00:26:18.604 ============================ 00:26:18.604 Security Send/Receive: Not Supported 00:26:18.604 Format NVM: Not Supported 00:26:18.604 Firmware Activate/Download: Not Supported 00:26:18.604 Namespace Management: Not Supported 00:26:18.604 Device Self-Test: Not Supported 00:26:18.604 Directives: Not Supported 00:26:18.604 NVMe-MI: Not Supported 00:26:18.604 Virtualization Management: Not Supported 00:26:18.604 Doorbell Buffer Config: Not Supported 00:26:18.604 Get LBA Status Capability: Not Supported 00:26:18.604 Command & Feature Lockdown Capability: Not Supported 00:26:18.604 Abort Command Limit: 4 00:26:18.604 Async Event Request Limit: 4 00:26:18.604 Number of Firmware Slots: N/A 00:26:18.604 Firmware Slot 1 Read-Only: N/A 00:26:18.604 Firmware Activation Without Reset: N/A 00:26:18.604 Multiple 
Update Detection Support: N/A 00:26:18.604 Firmware Update Granularity: No Information Provided 00:26:18.604 Per-Namespace SMART Log: No 00:26:18.604 Asymmetric Namespace Access Log Page: Not Supported 00:26:18.604 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:18.604 Command Effects Log Page: Supported 00:26:18.604 Get Log Page Extended Data: Supported 00:26:18.604 Telemetry Log Pages: Not Supported 00:26:18.604 Persistent Event Log Pages: Not Supported 00:26:18.604 Supported Log Pages Log Page: May Support 00:26:18.604 Commands Supported & Effects Log Page: Not Supported 00:26:18.604 Feature Identifiers & Effects Log Page:May Support 00:26:18.604 NVMe-MI Commands & Effects Log Page: May Support 00:26:18.604 Data Area 4 for Telemetry Log: Not Supported 00:26:18.604 Error Log Page Entries Supported: 128 00:26:18.604 Keep Alive: Supported 00:26:18.604 Keep Alive Granularity: 10000 ms 00:26:18.604 00:26:18.604 NVM Command Set Attributes 00:26:18.604 ========================== 00:26:18.604 Submission Queue Entry Size 00:26:18.604 Max: 64 00:26:18.604 Min: 64 00:26:18.604 Completion Queue Entry Size 00:26:18.604 Max: 16 00:26:18.604 Min: 16 00:26:18.604 Number of Namespaces: 32 00:26:18.604 Compare Command: Supported 00:26:18.604 Write Uncorrectable Command: Not Supported 00:26:18.604 Dataset Management Command: Supported 00:26:18.604 Write Zeroes Command: Supported 00:26:18.604 Set Features Save Field: Not Supported 00:26:18.604 Reservations: Supported 00:26:18.604 Timestamp: Not Supported 00:26:18.604 Copy: Supported 00:26:18.604 Volatile Write Cache: Present 00:26:18.604 Atomic Write Unit (Normal): 1 00:26:18.604 Atomic Write Unit (PFail): 1 00:26:18.604 Atomic Compare & Write Unit: 1 00:26:18.604 Fused Compare & Write: Supported 00:26:18.604 Scatter-Gather List 00:26:18.604 SGL Command Set: Supported 00:26:18.604 SGL Keyed: Supported 00:26:18.604 SGL Bit Bucket Descriptor: Not Supported 00:26:18.604 SGL Metadata Pointer: Not Supported 00:26:18.604 Oversized SGL: Not Supported 00:26:18.604 SGL Metadata Address: Not Supported 00:26:18.604 SGL Offset: Supported 00:26:18.604 Transport SGL Data Block: Not Supported 00:26:18.604 Replay Protected Memory Block: Not Supported 00:26:18.604 00:26:18.604 Firmware Slot Information 00:26:18.604 ========================= 00:26:18.604 Active slot: 1 00:26:18.604 Slot 1 Firmware Revision: 24.01.1 00:26:18.604 00:26:18.604 00:26:18.604 Commands Supported and Effects 00:26:18.604 ============================== 00:26:18.604 Admin Commands 00:26:18.604 -------------- 00:26:18.604 Get Log Page (02h): Supported 00:26:18.604 Identify (06h): Supported 00:26:18.604 Abort (08h): Supported 00:26:18.604 Set Features (09h): Supported 00:26:18.604 Get Features (0Ah): Supported 00:26:18.604 Asynchronous Event Request (0Ch): Supported 00:26:18.604 Keep Alive (18h): Supported 00:26:18.604 I/O Commands 00:26:18.604 ------------ 00:26:18.604 Flush (00h): Supported LBA-Change 00:26:18.604 Write (01h): Supported LBA-Change 00:26:18.604 Read (02h): Supported 00:26:18.604 Compare (05h): Supported 00:26:18.604 Write Zeroes (08h): Supported LBA-Change 00:26:18.604 Dataset Management (09h): Supported LBA-Change 00:26:18.604 Copy (19h): Supported LBA-Change 00:26:18.604 Unknown (79h): Supported LBA-Change 00:26:18.604 Unknown (7Ah): Supported 00:26:18.604 00:26:18.604 Error Log 00:26:18.604 ========= 00:26:18.604 00:26:18.604 Arbitration 00:26:18.604 =========== 00:26:18.604 Arbitration Burst: 1 00:26:18.604 00:26:18.604 Power Management 00:26:18.604 ================ 00:26:18.604 
Number of Power States: 1 00:26:18.604 Current Power State: Power State #0 00:26:18.604 Power State #0: 00:26:18.604 Max Power: 0.00 W 00:26:18.604 Non-Operational State: Operational 00:26:18.604 Entry Latency: Not Reported 00:26:18.604 Exit Latency: Not Reported 00:26:18.604 Relative Read Throughput: 0 00:26:18.604 Relative Read Latency: 0 00:26:18.604 Relative Write Throughput: 0 00:26:18.604 Relative Write Latency: 0 00:26:18.604 Idle Power: Not Reported 00:26:18.604 Active Power: Not Reported 00:26:18.604 Non-Operational Permissive Mode: Not Supported 00:26:18.604 00:26:18.604 Health Information 00:26:18.604 ================== 00:26:18.604 Critical Warnings: 00:26:18.605 Available Spare Space: OK 00:26:18.605 Temperature: OK 00:26:18.605 Device Reliability: OK 00:26:18.605 Read Only: No 00:26:18.605 Volatile Memory Backup: OK 00:26:18.605 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:18.605 Temperature Threshold: [2024-06-11 08:18:49.092404] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.605 [2024-06-11 08:18:49.092409] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.605 [2024-06-11 08:18:49.092412] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18bd9e0) 00:26:18.605 [2024-06-11 08:18:49.092419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.605 [2024-06-11 08:18:49.092430] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19260d0, cid 7, qid 0 00:26:18.605 [2024-06-11 08:18:49.092647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.605 [2024-06-11 08:18:49.092654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.605 [2024-06-11 08:18:49.092658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.605 [2024-06-11 08:18:49.092661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19260d0) on tqpair=0x18bd9e0 00:26:18.605 [2024-06-11 08:18:49.092692] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:18.605 [2024-06-11 08:18:49.092703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.605 [2024-06-11 08:18:49.092709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.605 [2024-06-11 08:18:49.092715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.605 [2024-06-11 08:18:49.092721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.606 [2024-06-11 08:18:49.092729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.092733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.092736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.092745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 08:18:49.092756] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 
00:26:18.606 [2024-06-11 08:18:49.092948] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.606 [2024-06-11 08:18:49.092954] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.606 [2024-06-11 08:18:49.092958] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.092962] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.606 [2024-06-11 08:18:49.092969] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.092972] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.092976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.092982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 08:18:49.092995] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.606 [2024-06-11 08:18:49.093215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.606 [2024-06-11 08:18:49.093221] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.606 [2024-06-11 08:18:49.093225] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.606 [2024-06-11 08:18:49.093234] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:18.606 [2024-06-11 08:18:49.093238] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:18.606 [2024-06-11 08:18:49.093247] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.093261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 08:18:49.093270] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.606 [2024-06-11 08:18:49.093502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.606 [2024-06-11 08:18:49.093509] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.606 [2024-06-11 08:18:49.093512] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093516] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.606 [2024-06-11 08:18:49.093526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093530] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.093540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 
08:18:49.093550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.606 [2024-06-11 08:18:49.093752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.606 [2024-06-11 08:18:49.093759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.606 [2024-06-11 08:18:49.093762] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.606 [2024-06-11 08:18:49.093776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.093785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.093791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 08:18:49.093801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.606 [2024-06-11 08:18:49.094006] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.606 [2024-06-11 08:18:49.094012] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.606 [2024-06-11 08:18:49.094016] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.606 [2024-06-11 08:18:49.094029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094033] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094037] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.094043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 08:18:49.094053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.606 [2024-06-11 08:18:49.094252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.606 [2024-06-11 08:18:49.094258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.606 [2024-06-11 08:18:49.094261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094265] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.606 [2024-06-11 08:18:49.094275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094279] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.094289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 08:18:49.094298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.606 [2024-06-11 08:18:49.094512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:26:18.606 [2024-06-11 08:18:49.094520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.606 [2024-06-11 08:18:49.094523] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094527] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.606 [2024-06-11 08:18:49.094537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.606 [2024-06-11 08:18:49.094544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.606 [2024-06-11 08:18:49.094551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.606 [2024-06-11 08:18:49.094561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.606 [2024-06-11 08:18:49.094760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.607 [2024-06-11 08:18:49.094767] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.607 [2024-06-11 08:18:49.094770] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.094774] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.607 [2024-06-11 08:18:49.094783] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.094789] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.094793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.607 [2024-06-11 08:18:49.094800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.607 [2024-06-11 08:18:49.094809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.607 [2024-06-11 08:18:49.095065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.607 [2024-06-11 08:18:49.095071] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.607 [2024-06-11 08:18:49.095075] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.095078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.607 [2024-06-11 08:18:49.095088] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.095092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.095096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.607 [2024-06-11 08:18:49.095102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.607 [2024-06-11 08:18:49.095111] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.607 [2024-06-11 08:18:49.095298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.607 [2024-06-11 08:18:49.095305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.607 [2024-06-11 08:18:49.095308] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.095311] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.607 [2024-06-11 08:18:49.095321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.095325] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.095329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18bd9e0) 00:26:18.607 [2024-06-11 08:18:49.095335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.607 [2024-06-11 08:18:49.095345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1925b50, cid 3, qid 0 00:26:18.607 [2024-06-11 08:18:49.099449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:18.607 [2024-06-11 08:18:49.099457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:18.607 [2024-06-11 08:18:49.099461] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:18.607 [2024-06-11 08:18:49.099465] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1925b50) on tqpair=0x18bd9e0 00:26:18.607 [2024-06-11 08:18:49.099473] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:26:18.607 0 Kelvin (-273 Celsius) 00:26:18.607 Available Spare: 0% 00:26:18.607 Available Spare Threshold: 0% 00:26:18.607 Life Percentage Used: 0% 00:26:18.607 Data Units Read: 0 00:26:18.607 Data Units Written: 0 00:26:18.607 Host Read Commands: 0 00:26:18.607 Host Write Commands: 0 00:26:18.607 Controller Busy Time: 0 minutes 00:26:18.607 Power Cycles: 0 00:26:18.607 Power On Hours: 0 hours 00:26:18.607 Unsafe Shutdowns: 0 00:26:18.607 Unrecoverable Media Errors: 0 00:26:18.607 Lifetime Error Log Entries: 0 00:26:18.607 Warning Temperature Time: 0 minutes 00:26:18.607 Critical Temperature Time: 0 minutes 00:26:18.607 00:26:18.607 Number of Queues 00:26:18.607 ================ 00:26:18.607 Number of I/O Submission Queues: 127 00:26:18.607 Number of I/O Completion Queues: 127 00:26:18.607 00:26:18.607 Active Namespaces 00:26:18.607 ================= 00:26:18.607 Namespace ID:1 00:26:18.607 Error Recovery Timeout: Unlimited 00:26:18.607 Command Set Identifier: NVM (00h) 00:26:18.607 Deallocate: Supported 00:26:18.607 Deallocated/Unwritten Error: Not Supported 00:26:18.607 Deallocated Read Value: Unknown 00:26:18.607 Deallocate in Write Zeroes: Not Supported 00:26:18.607 Deallocated Guard Field: 0xFFFF 00:26:18.607 Flush: Supported 00:26:18.607 Reservation: Supported 00:26:18.607 Namespace Sharing Capabilities: Multiple Controllers 00:26:18.607 Size (in LBAs): 131072 (0GiB) 00:26:18.607 Capacity (in LBAs): 131072 (0GiB) 00:26:18.607 Utilization (in LBAs): 131072 (0GiB) 00:26:18.607 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:18.607 EUI64: ABCDEF0123456789 00:26:18.607 UUID: 1bc5e50b-1a70-469c-aa1d-759859fd5e6c 00:26:18.607 Thin Provisioning: Not Supported 00:26:18.607 Per-NS Atomic Units: Yes 00:26:18.607 Atomic Boundary Size (Normal): 0 00:26:18.607 Atomic Boundary Size (PFail): 0 00:26:18.607 Atomic Boundary Offset: 0 00:26:18.607 Maximum Single Source Range Length: 65535 00:26:18.607 Maximum Copy Length: 65535 00:26:18.607 Maximum Source Range Count: 1 00:26:18.607 NGUID/EUI64 Never Reused: No 00:26:18.607 Namespace 
Write Protected: No 00:26:18.607 Number of LBA Formats: 1 00:26:18.607 Current LBA Format: LBA Format #00 00:26:18.607 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:18.607 00:26:18.607 08:18:49 -- host/identify.sh@51 -- # sync 00:26:18.607 08:18:49 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.607 08:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.607 08:18:49 -- common/autotest_common.sh@10 -- # set +x 00:26:18.607 08:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.607 08:18:49 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:18.607 08:18:49 -- host/identify.sh@56 -- # nvmftestfini 00:26:18.607 08:18:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:18.607 08:18:49 -- nvmf/common.sh@116 -- # sync 00:26:18.607 08:18:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:18.607 08:18:49 -- nvmf/common.sh@119 -- # set +e 00:26:18.607 08:18:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:18.607 08:18:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:18.607 rmmod nvme_tcp 00:26:18.607 rmmod nvme_fabrics 00:26:18.607 rmmod nvme_keyring 00:26:18.607 08:18:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:18.607 08:18:49 -- nvmf/common.sh@123 -- # set -e 00:26:18.607 08:18:49 -- nvmf/common.sh@124 -- # return 0 00:26:18.607 08:18:49 -- nvmf/common.sh@477 -- # '[' -n 1182534 ']' 00:26:18.607 08:18:49 -- nvmf/common.sh@478 -- # killprocess 1182534 00:26:18.607 08:18:49 -- common/autotest_common.sh@926 -- # '[' -z 1182534 ']' 00:26:18.607 08:18:49 -- common/autotest_common.sh@930 -- # kill -0 1182534 00:26:18.607 08:18:49 -- common/autotest_common.sh@931 -- # uname 00:26:18.607 08:18:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:18.607 08:18:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1182534 00:26:18.868 08:18:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:18.868 08:18:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:18.868 08:18:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1182534' 00:26:18.868 killing process with pid 1182534 00:26:18.868 08:18:49 -- common/autotest_common.sh@945 -- # kill 1182534 00:26:18.868 [2024-06-11 08:18:49.273542] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:18.868 08:18:49 -- common/autotest_common.sh@950 -- # wait 1182534 00:26:18.868 08:18:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:18.868 08:18:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:18.868 08:18:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:18.868 08:18:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.868 08:18:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:18.868 08:18:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.868 08:18:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.868 08:18:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.412 08:18:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:21.412 00:26:21.412 real 0m10.606s 00:26:21.412 user 0m7.824s 00:26:21.412 sys 0m5.397s 00:26:21.412 08:18:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.412 08:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 ************************************ 
00:26:21.412 END TEST nvmf_identify 00:26:21.412 ************************************ 00:26:21.412 08:18:51 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:21.412 08:18:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:21.412 08:18:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.412 08:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.412 ************************************ 00:26:21.412 START TEST nvmf_perf 00:26:21.412 ************************************ 00:26:21.412 08:18:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:21.412 * Looking for test storage... 00:26:21.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.412 08:18:51 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.412 08:18:51 -- nvmf/common.sh@7 -- # uname -s 00:26:21.412 08:18:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.412 08:18:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.412 08:18:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.412 08:18:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.412 08:18:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.412 08:18:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.412 08:18:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.412 08:18:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.412 08:18:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.412 08:18:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.412 08:18:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.412 08:18:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.412 08:18:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.412 08:18:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.412 08:18:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.412 08:18:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.412 08:18:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.412 08:18:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.412 08:18:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.412 08:18:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.412 08:18:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.412 08:18:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.412 08:18:51 -- paths/export.sh@5 -- # export PATH 00:26:21.412 08:18:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.412 08:18:51 -- nvmf/common.sh@46 -- # : 0 00:26:21.412 08:18:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:21.412 08:18:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:21.412 08:18:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:21.412 08:18:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.412 08:18:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.412 08:18:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:21.412 08:18:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:21.412 08:18:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:21.412 08:18:51 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:21.412 08:18:51 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:21.412 08:18:51 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:21.412 08:18:51 -- host/perf.sh@17 -- # nvmftestinit 00:26:21.412 08:18:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:21.412 08:18:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.412 08:18:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:21.412 08:18:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:21.412 08:18:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:21.412 08:18:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.412 08:18:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.412 08:18:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.412 08:18:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:21.412 08:18:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:21.412 08:18:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:21.412 08:18:51 -- 
common/autotest_common.sh@10 -- # set +x 00:26:28.014 08:18:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:28.014 08:18:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:28.014 08:18:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:28.014 08:18:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:28.014 08:18:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:28.014 08:18:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:28.014 08:18:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:28.014 08:18:58 -- nvmf/common.sh@294 -- # net_devs=() 00:26:28.014 08:18:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:28.014 08:18:58 -- nvmf/common.sh@295 -- # e810=() 00:26:28.014 08:18:58 -- nvmf/common.sh@295 -- # local -ga e810 00:26:28.014 08:18:58 -- nvmf/common.sh@296 -- # x722=() 00:26:28.014 08:18:58 -- nvmf/common.sh@296 -- # local -ga x722 00:26:28.014 08:18:58 -- nvmf/common.sh@297 -- # mlx=() 00:26:28.014 08:18:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:28.014 08:18:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.014 08:18:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:28.014 08:18:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:28.014 08:18:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:28.014 08:18:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:28.014 08:18:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:28.014 08:18:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:28.014 08:18:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:28.014 08:18:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:28.014 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:28.014 08:18:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.014 08:18:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.014 08:18:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.014 08:18:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:28.015 08:18:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:28.015 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:28.015 08:18:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:26:28.015 08:18:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:28.015 08:18:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.015 08:18:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.015 08:18:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.015 08:18:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.015 08:18:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:28.015 Found net devices under 0000:31:00.0: cvl_0_0 00:26:28.015 08:18:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.015 08:18:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.015 08:18:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.015 08:18:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.015 08:18:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.015 08:18:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:28.015 Found net devices under 0000:31:00.1: cvl_0_1 00:26:28.015 08:18:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.015 08:18:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:28.015 08:18:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:28.015 08:18:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:28.015 08:18:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:28.015 08:18:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.015 08:18:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.015 08:18:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.015 08:18:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:28.015 08:18:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.015 08:18:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.015 08:18:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:28.015 08:18:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.015 08:18:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.015 08:18:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:28.015 08:18:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:28.015 08:18:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.015 08:18:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.276 08:18:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.276 08:18:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.276 08:18:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:28.276 08:18:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.276 08:18:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.276 08:18:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.276 08:18:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:28.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:28.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:26:28.276 00:26:28.276 --- 10.0.0.2 ping statistics --- 00:26:28.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.276 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:26:28.276 08:18:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:26:28.276 00:26:28.276 --- 10.0.0.1 ping statistics --- 00:26:28.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.276 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:26:28.276 08:18:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.276 08:18:58 -- nvmf/common.sh@410 -- # return 0 00:26:28.276 08:18:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:28.276 08:18:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.276 08:18:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:28.276 08:18:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:28.276 08:18:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.276 08:18:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:28.276 08:18:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:28.276 08:18:58 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:28.276 08:18:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:28.277 08:18:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:28.277 08:18:58 -- common/autotest_common.sh@10 -- # set +x 00:26:28.277 08:18:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:28.277 08:18:58 -- nvmf/common.sh@469 -- # nvmfpid=1186967 00:26:28.277 08:18:58 -- nvmf/common.sh@470 -- # waitforlisten 1186967 00:26:28.277 08:18:58 -- common/autotest_common.sh@819 -- # '[' -z 1186967 ']' 00:26:28.277 08:18:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.277 08:18:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.277 08:18:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.277 08:18:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.277 08:18:58 -- common/autotest_common.sh@10 -- # set +x 00:26:28.537 [2024-06-11 08:18:58.956766] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:28.537 [2024-06-11 08:18:58.956830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.537 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.537 [2024-06-11 08:18:59.028145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.537 [2024-06-11 08:18:59.101369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:28.537 [2024-06-11 08:18:59.101510] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.537 [2024-06-11 08:18:59.101520] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:28.537 [2024-06-11 08:18:59.101529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.537 [2024-06-11 08:18:59.101851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.537 [2024-06-11 08:18:59.101934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.537 [2024-06-11 08:18:59.102082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.537 [2024-06-11 08:18:59.102083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:29.108 08:18:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.108 08:18:59 -- common/autotest_common.sh@852 -- # return 0 00:26:29.108 08:18:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:29.108 08:18:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:29.108 08:18:59 -- common/autotest_common.sh@10 -- # set +x 00:26:29.368 08:18:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.368 08:18:59 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:29.368 08:18:59 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:29.628 08:19:00 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:29.628 08:19:00 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:29.888 08:19:00 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:29.888 08:19:00 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:30.148 08:19:00 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:30.148 08:19:00 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:30.148 08:19:00 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:30.148 08:19:00 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:30.148 08:19:00 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:30.148 [2024-06-11 08:19:00.724526] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.148 08:19:00 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:30.408 08:19:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:30.408 08:19:00 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:30.669 08:19:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:30.669 08:19:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:30.669 08:19:01 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.929 [2024-06-11 08:19:01.374980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.930 08:19:01 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:30.930 08:19:01 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:30.930 08:19:01 -- host/perf.sh@53 -- # perf_app -i 0 -q 
32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:30.930 08:19:01 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:30.930 08:19:01 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:32.310 Initializing NVMe Controllers 00:26:32.310 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:32.310 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:32.310 Initialization complete. Launching workers. 00:26:32.310 ======================================================== 00:26:32.310 Latency(us) 00:26:32.310 Device Information : IOPS MiB/s Average min max 00:26:32.310 PCIE (0000:65:00.0) NSID 1 from core 0: 81006.54 316.43 394.48 13.10 5017.41 00:26:32.310 ======================================================== 00:26:32.310 Total : 81006.54 316.43 394.48 13.10 5017.41 00:26:32.310 00:26:32.310 08:19:02 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.310 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.693 Initializing NVMe Controllers 00:26:33.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:33.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:33.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:33.693 Initialization complete. Launching workers. 00:26:33.693 ======================================================== 00:26:33.693 Latency(us) 00:26:33.693 Device Information : IOPS MiB/s Average min max 00:26:33.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 11026.97 105.72 45604.76 00:26:33.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17939.56 4987.42 49883.65 00:26:33.693 ======================================================== 00:26:33.693 Total : 150.00 0.59 13607.67 105.72 49883.65 00:26:33.693 00:26:33.693 08:19:04 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:33.693 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.076 Initializing NVMe Controllers 00:26:35.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:35.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:35.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:35.076 Initialization complete. Launching workers. 
00:26:35.076 ======================================================== 00:26:35.076 Latency(us) 00:26:35.076 Device Information : IOPS MiB/s Average min max 00:26:35.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10578.73 41.32 3026.15 460.20 6611.57 00:26:35.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3912.53 15.28 8222.83 7083.99 15698.98 00:26:35.076 ======================================================== 00:26:35.076 Total : 14491.26 56.61 4429.21 460.20 15698.98 00:26:35.076 00:26:35.076 08:19:05 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:35.076 08:19:05 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:35.076 08:19:05 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.336 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.883 Initializing NVMe Controllers 00:26:37.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.883 Controller IO queue size 128, less than required. 00:26:37.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.883 Controller IO queue size 128, less than required. 00:26:37.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:37.883 Initialization complete. Launching workers. 00:26:37.883 ======================================================== 00:26:37.883 Latency(us) 00:26:37.883 Device Information : IOPS MiB/s Average min max 00:26:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1596.71 399.18 82112.21 47789.29 120675.21 00:26:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 581.98 145.50 227275.14 77563.66 326655.86 00:26:37.883 ======================================================== 00:26:37.883 Total : 2178.69 544.67 120888.84 47789.29 326655.86 00:26:37.883 00:26:37.883 08:19:08 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:37.883 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.883 No valid NVMe controllers or AIO or URING devices found 00:26:37.883 Initializing NVMe Controllers 00:26:37.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.883 Controller IO queue size 128, less than required. 00:26:37.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.883 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:37.883 Controller IO queue size 128, less than required. 00:26:37.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.883 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:37.883 WARNING: Some requested NVMe devices were skipped 00:26:37.883 08:19:08 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:37.883 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.430 Initializing NVMe Controllers 00:26:40.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.431 Controller IO queue size 128, less than required. 00:26:40.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:40.431 Controller IO queue size 128, less than required. 00:26:40.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:40.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:40.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:40.431 Initialization complete. Launching workers. 00:26:40.431 00:26:40.431 ==================== 00:26:40.431 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:40.431 TCP transport: 00:26:40.431 polls: 23024 00:26:40.431 idle_polls: 13390 00:26:40.431 sock_completions: 9634 00:26:40.431 nvme_completions: 6754 00:26:40.431 submitted_requests: 10314 00:26:40.431 queued_requests: 1 00:26:40.431 00:26:40.431 ==================== 00:26:40.431 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:40.431 TCP transport: 00:26:40.431 polls: 20303 00:26:40.431 idle_polls: 10487 00:26:40.431 sock_completions: 9816 00:26:40.431 nvme_completions: 6466 00:26:40.431 submitted_requests: 9874 00:26:40.431 queued_requests: 1 00:26:40.431 ======================================================== 00:26:40.431 Latency(us) 00:26:40.431 Device Information : IOPS MiB/s Average min max 00:26:40.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1751.86 437.96 74772.18 49342.19 125659.38 00:26:40.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1679.87 419.97 77196.66 32400.44 132990.94 00:26:40.431 ======================================================== 00:26:40.431 Total : 3431.73 857.93 75958.99 32400.44 132990.94 00:26:40.431 00:26:40.431 08:19:10 -- host/perf.sh@66 -- # sync 00:26:40.431 08:19:10 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.431 08:19:10 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:40.431 08:19:10 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:26:40.431 08:19:10 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:41.374 08:19:12 -- host/perf.sh@72 -- # ls_guid=f1a50631-7f63-4104-8180-cf87696ce8e3 00:26:41.636 08:19:12 -- host/perf.sh@73 -- # get_lvs_free_mb f1a50631-7f63-4104-8180-cf87696ce8e3 00:26:41.636 08:19:12 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f1a50631-7f63-4104-8180-cf87696ce8e3 00:26:41.636 08:19:12 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:41.636 08:19:12 -- common/autotest_common.sh@1345 -- # local fc 00:26:41.636 08:19:12 -- common/autotest_common.sh@1346 -- # local cs 00:26:41.636 08:19:12 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:41.636 08:19:12 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:41.636 { 00:26:41.636 "uuid": "f1a50631-7f63-4104-8180-cf87696ce8e3", 00:26:41.636 "name": "lvs_0", 00:26:41.636 "base_bdev": "Nvme0n1", 00:26:41.636 "total_data_clusters": 457407, 00:26:41.636 "free_clusters": 457407, 00:26:41.636 "block_size": 512, 00:26:41.636 "cluster_size": 4194304 00:26:41.636 } 00:26:41.636 ]' 00:26:41.636 08:19:12 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f1a50631-7f63-4104-8180-cf87696ce8e3") .free_clusters' 00:26:41.636 08:19:12 -- common/autotest_common.sh@1348 -- # fc=457407 00:26:41.636 08:19:12 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f1a50631-7f63-4104-8180-cf87696ce8e3") .cluster_size' 00:26:41.896 08:19:12 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:41.896 08:19:12 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:26:41.896 08:19:12 -- common/autotest_common.sh@1353 -- # echo 1829628 00:26:41.896 1829628 00:26:41.896 08:19:12 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:26:41.896 08:19:12 -- host/perf.sh@78 -- # free_mb=20480 00:26:41.896 08:19:12 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1a50631-7f63-4104-8180-cf87696ce8e3 lbd_0 20480 00:26:41.896 08:19:12 -- host/perf.sh@80 -- # lb_guid=3640e2e7-04ab-4ed0-88ba-e4c944b37906 00:26:41.896 08:19:12 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3640e2e7-04ab-4ed0-88ba-e4c944b37906 lvs_n_0 00:26:43.809 08:19:14 -- host/perf.sh@83 -- # ls_nested_guid=29ef023c-1c19-4fc2-8c20-3402bf24d3cd 00:26:43.809 08:19:14 -- host/perf.sh@84 -- # get_lvs_free_mb 29ef023c-1c19-4fc2-8c20-3402bf24d3cd 00:26:43.809 08:19:14 -- common/autotest_common.sh@1343 -- # local lvs_uuid=29ef023c-1c19-4fc2-8c20-3402bf24d3cd 00:26:43.809 08:19:14 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:43.809 08:19:14 -- common/autotest_common.sh@1345 -- # local fc 00:26:43.809 08:19:14 -- common/autotest_common.sh@1346 -- # local cs 00:26:43.809 08:19:14 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:43.809 08:19:14 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:43.809 { 00:26:43.809 "uuid": "f1a50631-7f63-4104-8180-cf87696ce8e3", 00:26:43.809 "name": "lvs_0", 00:26:43.809 "base_bdev": "Nvme0n1", 00:26:43.809 "total_data_clusters": 457407, 00:26:43.809 "free_clusters": 452287, 00:26:43.809 "block_size": 512, 00:26:43.809 "cluster_size": 4194304 00:26:43.809 }, 00:26:43.809 { 00:26:43.809 "uuid": "29ef023c-1c19-4fc2-8c20-3402bf24d3cd", 00:26:43.809 "name": "lvs_n_0", 00:26:43.809 "base_bdev": "3640e2e7-04ab-4ed0-88ba-e4c944b37906", 00:26:43.809 "total_data_clusters": 5114, 00:26:43.809 "free_clusters": 5114, 00:26:43.809 "block_size": 512, 00:26:43.809 "cluster_size": 4194304 00:26:43.809 } 00:26:43.809 ]' 00:26:43.809 08:19:14 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="29ef023c-1c19-4fc2-8c20-3402bf24d3cd") .free_clusters' 00:26:43.809 08:19:14 -- common/autotest_common.sh@1348 -- # fc=5114 00:26:43.809 08:19:14 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="29ef023c-1c19-4fc2-8c20-3402bf24d3cd") .cluster_size' 00:26:43.809 08:19:14 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:43.809 08:19:14 -- common/autotest_common.sh@1352 
-- # free_mb=20456 00:26:43.809 08:19:14 -- common/autotest_common.sh@1353 -- # echo 20456 00:26:43.809 20456 00:26:43.809 08:19:14 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:43.809 08:19:14 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 29ef023c-1c19-4fc2-8c20-3402bf24d3cd lbd_nest_0 20456 00:26:44.070 08:19:14 -- host/perf.sh@88 -- # lb_nested_guid=d036b0e6-59d3-452b-b254-6d8bc7644c5a 00:26:44.070 08:19:14 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:44.070 08:19:14 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:44.070 08:19:14 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d036b0e6-59d3-452b-b254-6d8bc7644c5a 00:26:44.331 08:19:14 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:44.592 08:19:15 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:44.592 08:19:15 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:44.592 08:19:15 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:44.592 08:19:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.592 08:19:15 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.592 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.901 Initializing NVMe Controllers 00:26:56.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:56.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:56.901 Initialization complete. Launching workers. 00:26:56.901 ======================================================== 00:26:56.901 Latency(us) 00:26:56.901 Device Information : IOPS MiB/s Average min max 00:26:56.901 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.80 0.02 21451.54 115.74 45818.62 00:26:56.901 ======================================================== 00:26:56.901 Total : 46.80 0.02 21451.54 115.74 45818.62 00:26:56.901 00:26:56.901 08:19:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:56.901 08:19:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:56.901 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.934 Initializing NVMe Controllers 00:27:06.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:06.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:06.934 Initialization complete. Launching workers. 
00:27:06.934 ======================================================== 00:27:06.934 Latency(us) 00:27:06.934 Device Information : IOPS MiB/s Average min max 00:27:06.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 54.80 6.85 18261.54 7973.91 55870.06 00:27:06.934 ======================================================== 00:27:06.934 Total : 54.80 6.85 18261.54 7973.91 55870.06 00:27:06.934 00:27:06.934 08:19:35 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:06.934 08:19:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:06.934 08:19:35 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:06.934 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.938 Initializing NVMe Controllers 00:27:16.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:16.938 Initialization complete. Launching workers. 00:27:16.938 ======================================================== 00:27:16.938 Latency(us) 00:27:16.938 Device Information : IOPS MiB/s Average min max 00:27:16.938 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8773.55 4.28 3647.02 260.45 10243.85 00:27:16.938 ======================================================== 00:27:16.938 Total : 8773.55 4.28 3647.02 260.45 10243.85 00:27:16.938 00:27:16.938 08:19:46 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:16.938 08:19:46 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.938 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.948 Initializing NVMe Controllers 00:27:26.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:26.948 Initialization complete. Launching workers. 00:27:26.948 ======================================================== 00:27:26.949 Latency(us) 00:27:26.949 Device Information : IOPS MiB/s Average min max 00:27:26.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4081.15 510.14 7840.57 526.04 23062.23 00:27:26.949 ======================================================== 00:27:26.949 Total : 4081.15 510.14 7840.57 526.04 23062.23 00:27:26.949 00:27:26.949 08:19:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:26.949 08:19:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:26.949 08:19:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.949 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.953 Initializing NVMe Controllers 00:27:36.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.953 Controller IO queue size 128, less than required. 00:27:36.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:36.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:36.953 Initialization complete. Launching workers. 
00:27:36.953 ======================================================== 00:27:36.953 Latency(us) 00:27:36.953 Device Information : IOPS MiB/s Average min max 00:27:36.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15843.88 7.74 8078.71 1886.97 22608.68 00:27:36.953 ======================================================== 00:27:36.953 Total : 15843.88 7.74 8078.71 1886.97 22608.68 00:27:36.953 00:27:36.953 08:20:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:36.953 08:20:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:36.953 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.956 Initializing NVMe Controllers 00:27:46.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.956 Controller IO queue size 128, less than required. 00:27:46.956 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:46.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:46.956 Initialization complete. Launching workers. 00:27:46.956 ======================================================== 00:27:46.956 Latency(us) 00:27:46.956 Device Information : IOPS MiB/s Average min max 00:27:46.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1190.87 148.86 108003.20 15141.06 246762.21 00:27:46.956 ======================================================== 00:27:46.956 Total : 1190.87 148.86 108003.20 15141.06 246762.21 00:27:46.956 00:27:46.956 08:20:17 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:46.956 08:20:17 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d036b0e6-59d3-452b-b254-6d8bc7644c5a 00:27:48.342 08:20:18 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:48.603 08:20:19 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3640e2e7-04ab-4ed0-88ba-e4c944b37906 00:27:48.603 08:20:19 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:48.864 08:20:19 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:48.864 08:20:19 -- host/perf.sh@114 -- # nvmftestfini 00:27:48.864 08:20:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:48.864 08:20:19 -- nvmf/common.sh@116 -- # sync 00:27:48.864 08:20:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:48.864 08:20:19 -- nvmf/common.sh@119 -- # set +e 00:27:48.864 08:20:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:48.864 08:20:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:48.864 rmmod nvme_tcp 00:27:48.864 rmmod nvme_fabrics 00:27:48.864 rmmod nvme_keyring 00:27:48.864 08:20:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:48.864 08:20:19 -- nvmf/common.sh@123 -- # set -e 00:27:48.864 08:20:19 -- nvmf/common.sh@124 -- # return 0 00:27:48.864 08:20:19 -- nvmf/common.sh@477 -- # '[' -n 1186967 ']' 00:27:48.864 08:20:19 -- nvmf/common.sh@478 -- # killprocess 1186967 00:27:48.864 08:20:19 -- common/autotest_common.sh@926 -- # '[' -z 1186967 ']' 00:27:48.864 08:20:19 -- common/autotest_common.sh@930 -- # kill 
-0 1186967 00:27:48.864 08:20:19 -- common/autotest_common.sh@931 -- # uname 00:27:48.864 08:20:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:48.864 08:20:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1186967 00:27:49.125 08:20:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:49.125 08:20:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:49.125 08:20:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1186967' 00:27:49.125 killing process with pid 1186967 00:27:49.125 08:20:19 -- common/autotest_common.sh@945 -- # kill 1186967 00:27:49.125 08:20:19 -- common/autotest_common.sh@950 -- # wait 1186967 00:27:51.040 08:20:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:51.040 08:20:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:51.040 08:20:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:51.040 08:20:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:51.040 08:20:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:51.040 08:20:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.040 08:20:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.040 08:20:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.963 08:20:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:52.963 00:27:52.963 real 1m32.030s 00:27:52.963 user 5m25.967s 00:27:52.963 sys 0m14.404s 00:27:52.963 08:20:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.963 08:20:23 -- common/autotest_common.sh@10 -- # set +x 00:27:52.963 ************************************ 00:27:52.963 END TEST nvmf_perf 00:27:52.963 ************************************ 00:27:52.963 08:20:23 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:52.963 08:20:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:52.963 08:20:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:52.963 08:20:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.224 ************************************ 00:27:53.224 START TEST nvmf_fio_host 00:27:53.224 ************************************ 00:27:53.224 08:20:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:53.224 * Looking for test storage... 
00:27:53.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.224 08:20:23 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.224 08:20:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.224 08:20:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.224 08:20:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.224 08:20:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- paths/export.sh@5 -- # export PATH 00:27:53.224 08:20:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.224 08:20:23 -- nvmf/common.sh@7 -- # uname -s 00:27:53.224 08:20:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.224 08:20:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.224 08:20:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.224 08:20:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.224 08:20:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.224 08:20:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.224 08:20:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.224 08:20:23 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.224 08:20:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.224 08:20:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.224 08:20:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:53.224 08:20:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:53.224 08:20:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.224 08:20:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.224 08:20:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.224 08:20:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.224 08:20:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.224 08:20:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.224 08:20:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.224 08:20:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- paths/export.sh@5 -- # export PATH 00:27:53.224 08:20:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.224 08:20:23 -- nvmf/common.sh@46 -- # : 0 00:27:53.224 08:20:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:53.224 08:20:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:53.224 08:20:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:53.224 08:20:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.224 08:20:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.224 08:20:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:53.224 08:20:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:53.224 08:20:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:53.224 08:20:23 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.224 08:20:23 -- host/fio.sh@14 -- # nvmftestinit 00:27:53.224 08:20:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:53.224 08:20:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.224 08:20:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:53.224 08:20:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:53.224 08:20:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:53.224 08:20:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.224 08:20:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.224 08:20:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.224 08:20:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:53.224 08:20:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:53.224 08:20:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:53.224 08:20:23 -- common/autotest_common.sh@10 -- # set +x 00:28:01.365 08:20:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:01.365 08:20:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:01.365 08:20:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:01.365 08:20:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:01.365 08:20:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:01.365 08:20:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:01.365 08:20:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:01.365 08:20:30 -- nvmf/common.sh@294 -- # net_devs=() 00:28:01.365 08:20:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:01.365 08:20:30 -- nvmf/common.sh@295 -- # e810=() 00:28:01.365 08:20:30 -- nvmf/common.sh@295 -- # local -ga e810 00:28:01.365 08:20:30 -- nvmf/common.sh@296 -- # x722=() 00:28:01.365 08:20:30 -- nvmf/common.sh@296 -- # local -ga x722 00:28:01.365 08:20:30 -- nvmf/common.sh@297 -- # mlx=() 00:28:01.365 08:20:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:01.365 08:20:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.365 08:20:30 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.365 08:20:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:01.365 08:20:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:01.365 08:20:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:01.365 08:20:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:01.365 08:20:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:01.365 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:01.365 08:20:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:01.365 08:20:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:01.365 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:01.365 08:20:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:01.365 08:20:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:01.365 08:20:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.365 08:20:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:01.365 08:20:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.365 08:20:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:01.365 Found net devices under 0000:31:00.0: cvl_0_0 00:28:01.365 08:20:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.365 08:20:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:01.365 08:20:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.365 08:20:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:01.365 08:20:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.365 08:20:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:01.365 Found net devices under 0000:31:00.1: cvl_0_1 
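The detection pass above works entirely from sysfs: the e810/x722/mlx arrays collect the PCI addresses cached under the matching vendor:device IDs (0x8086:0x159b is the E810 used on this host), pci_devs is narrowed to those functions, and each function is mapped to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/. A reduced sketch of that mapping step, with the two PCI addresses from this log hard-coded as an assumption:

    # sketch: resolve PCI functions to net device names the way nvmf/common.sh does
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue          # no netdev (e.g. port bound to vfio-pci), skip it
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

With both ports resolved (cvl_0_0 and cvl_0_1), is_hw=yes and the TCP branch falls through to nvmf_tcp_init, which repeats the namespace setup shown earlier in the run.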
00:28:01.365 08:20:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.365 08:20:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:01.365 08:20:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:01.365 08:20:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:01.365 08:20:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.365 08:20:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.365 08:20:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.365 08:20:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:01.365 08:20:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.365 08:20:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.365 08:20:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:01.365 08:20:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.365 08:20:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.365 08:20:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:01.365 08:20:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:01.365 08:20:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.365 08:20:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.365 08:20:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.365 08:20:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.365 08:20:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:01.365 08:20:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.365 08:20:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.365 08:20:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:01.365 08:20:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:01.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:01.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:28:01.365 00:28:01.365 --- 10.0.0.2 ping statistics --- 00:28:01.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.365 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:28:01.365 08:20:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:01.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:01.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:28:01.365 00:28:01.365 --- 10.0.0.1 ping statistics --- 00:28:01.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:01.365 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:28:01.365 08:20:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:01.365 08:20:30 -- nvmf/common.sh@410 -- # return 0 00:28:01.365 08:20:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:01.365 08:20:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:01.365 08:20:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:01.365 08:20:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:01.365 08:20:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:01.365 08:20:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:01.365 08:20:30 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:01.365 08:20:30 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:01.365 08:20:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:01.365 08:20:30 -- common/autotest_common.sh@10 -- # set +x 00:28:01.365 08:20:30 -- host/fio.sh@24 -- # nvmfpid=1207316 00:28:01.365 08:20:30 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:01.365 08:20:30 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:01.365 08:20:30 -- host/fio.sh@28 -- # waitforlisten 1207316 00:28:01.365 08:20:30 -- common/autotest_common.sh@819 -- # '[' -z 1207316 ']' 00:28:01.365 08:20:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.365 08:20:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:01.365 08:20:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.365 08:20:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:01.365 08:20:30 -- common/autotest_common.sh@10 -- # set +x 00:28:01.365 [2024-06-11 08:20:30.976415] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:01.365 [2024-06-11 08:20:30.976487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.365 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.365 [2024-06-11 08:20:31.048721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.365 [2024-06-11 08:20:31.122101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:01.365 [2024-06-11 08:20:31.122237] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.365 [2024-06-11 08:20:31.122247] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.365 [2024-06-11 08:20:31.122255] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
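At this point nvmfappstart has launched a second nvmf_tgt instance (pid 1207316) inside cvl_0_0_ns_spdk, and waitforlisten polls until the application answers on its UNIX RPC socket before any rpc.py configuration calls are made. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket path and using rpc_get_methods as the liveness probe in place of the waitforlisten helper:

    # sketch: start the target in the namespace, then block until its RPC socket answers
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break                               # target is up and serving RPCs
        fi
        sleep 0.1
    done
    # only now is it safe to configure the transport, as the next log lines do
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The tcp transport type and the -u 8192 in-capsule data size in the sketch match the nvmf_create_transport call that appears a few lines below.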
00:28:01.365 [2024-06-11 08:20:31.122486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.365 [2024-06-11 08:20:31.122714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.365 [2024-06-11 08:20:31.122715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.365 [2024-06-11 08:20:31.122548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.365 08:20:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:01.365 08:20:31 -- common/autotest_common.sh@852 -- # return 0 00:28:01.365 08:20:31 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:01.365 [2024-06-11 08:20:31.890876] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.365 08:20:31 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:01.365 08:20:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:01.365 08:20:31 -- common/autotest_common.sh@10 -- # set +x 00:28:01.365 08:20:31 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:01.625 Malloc1 00:28:01.625 08:20:32 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:01.885 08:20:32 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:01.885 08:20:32 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.145 [2024-06-11 08:20:32.592466] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.145 08:20:32 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:02.145 08:20:32 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:02.145 08:20:32 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:02.145 08:20:32 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:02.145 08:20:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:02.145 08:20:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.145 08:20:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:02.145 08:20:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:02.145 08:20:32 -- common/autotest_common.sh@1320 -- # shift 00:28:02.145 08:20:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:02.145 08:20:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.145 08:20:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:02.145 08:20:32 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:28:02.145 08:20:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:02.428 08:20:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:02.428 08:20:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:02.428 08:20:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.428 08:20:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:02.428 08:20:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:02.428 08:20:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:02.428 08:20:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:02.428 08:20:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:02.428 08:20:32 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:02.428 08:20:32 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:02.695 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:02.695 fio-3.35 00:28:02.695 Starting 1 thread 00:28:02.695 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.268 00:28:05.268 test: (groupid=0, jobs=1): err= 0: pid=1207994: Tue Jun 11 08:20:35 2024 00:28:05.268 read: IOPS=14.1k, BW=55.0MiB/s (57.7MB/s)(110MiB/2004msec) 00:28:05.268 slat (usec): min=2, max=274, avg= 2.16, stdev= 2.30 00:28:05.268 clat (usec): min=3584, max=8531, avg=4987.88, stdev=735.75 00:28:05.268 lat (usec): min=3587, max=8533, avg=4990.04, stdev=735.85 00:28:05.268 clat percentiles (usec): 00:28:05.268 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4555], 00:28:05.268 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:28:05.268 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 6259], 95.00th=[ 6915], 00:28:05.268 | 99.00th=[ 7439], 99.50th=[ 7635], 99.90th=[ 7898], 99.95th=[ 7963], 00:28:05.268 | 99.99th=[ 8029] 00:28:05.268 bw ( KiB/s): min=46720, max=59616, per=99.98%, avg=56338.00, stdev=6412.30, samples=4 00:28:05.268 iops : min=11680, max=14904, avg=14084.50, stdev=1603.08, samples=4 00:28:05.268 write: IOPS=14.1k, BW=55.1MiB/s (57.7MB/s)(110MiB/2004msec); 0 zone resets 00:28:05.268 slat (usec): min=2, max=259, avg= 2.25, stdev= 1.71 00:28:05.268 clat (usec): min=2864, max=6757, avg=4039.65, stdev=606.37 00:28:05.268 lat (usec): min=2869, max=7016, avg=4041.90, stdev=606.50 00:28:05.268 clat percentiles (usec): 00:28:05.268 | 1.00th=[ 3261], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3654], 00:28:05.268 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:28:05.268 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 5145], 95.00th=[ 5604], 00:28:05.268 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6456], 99.95th=[ 6587], 00:28:05.268 | 99.99th=[ 6652] 00:28:05.268 bw ( KiB/s): min=47240, max=59576, per=99.97%, avg=56370.00, stdev=6087.95, samples=4 00:28:05.268 iops : min=11810, max=14894, avg=14092.50, stdev=1521.99, samples=4 00:28:05.268 lat (msec) : 4=32.75%, 10=67.25% 00:28:05.268 cpu : usr=76.29%, sys=22.37%, ctx=20, majf=0, minf=5 00:28:05.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:05.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:05.268 issued rwts: total=28230,28251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:05.268 00:28:05.268 Run status group 0 (all jobs): 00:28:05.268 READ: bw=55.0MiB/s (57.7MB/s), 55.0MiB/s-55.0MiB/s (57.7MB/s-57.7MB/s), io=110MiB (116MB), run=2004-2004msec 00:28:05.268 WRITE: bw=55.1MiB/s (57.7MB/s), 55.1MiB/s-55.1MiB/s (57.7MB/s-57.7MB/s), io=110MiB (116MB), run=2004-2004msec 00:28:05.268 08:20:35 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:05.268 08:20:35 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:05.268 08:20:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:05.268 08:20:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.268 08:20:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:05.268 08:20:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:05.268 08:20:35 -- common/autotest_common.sh@1320 -- # shift 00:28:05.268 08:20:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:05.268 08:20:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:05.268 08:20:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:05.268 08:20:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:05.268 08:20:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:05.268 08:20:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:05.268 08:20:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:05.268 08:20:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:05.532 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:05.532 fio-3.35 00:28:05.532 Starting 1 thread 00:28:05.532 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.076 00:28:08.076 test: (groupid=0, jobs=1): err= 0: pid=1208687: Tue Jun 11 08:20:38 2024 00:28:08.076 read: IOPS=9738, BW=152MiB/s (160MB/s)(305MiB/2005msec) 00:28:08.076 slat (usec): min=3, max=113, avg= 3.66, stdev= 1.78 00:28:08.076 clat (usec): min=1176, max=14459, avg=7891.91, stdev=1913.30 
00:28:08.076 lat (usec): min=1180, max=14463, avg=7895.57, stdev=1913.50 00:28:08.076 clat percentiles (usec): 00:28:08.076 | 1.00th=[ 4080], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6194], 00:28:08.076 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7832], 60.00th=[ 8356], 00:28:08.076 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10945], 00:28:08.076 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13566], 99.95th=[13960], 00:28:08.076 | 99.99th=[14222] 00:28:08.076 bw ( KiB/s): min=72160, max=83488, per=49.30%, avg=76824.00, stdev=4777.79, samples=4 00:28:08.076 iops : min= 4510, max= 5218, avg=4801.50, stdev=298.61, samples=4 00:28:08.076 write: IOPS=5619, BW=87.8MiB/s (92.1MB/s)(157MiB/1785msec); 0 zone resets 00:28:08.076 slat (usec): min=39, max=442, avg=41.16, stdev= 8.33 00:28:08.077 clat (usec): min=2198, max=16828, avg=9133.55, stdev=1508.88 00:28:08.077 lat (usec): min=2239, max=16868, avg=9174.71, stdev=1510.94 00:28:08.077 clat percentiles (usec): 00:28:08.077 | 1.00th=[ 6063], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 7963], 00:28:08.077 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:28:08.077 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[11076], 95.00th=[11863], 00:28:08.077 | 99.00th=[13829], 99.50th=[14484], 99.90th=[15270], 99.95th=[15401], 00:28:08.077 | 99.99th=[16712] 00:28:08.077 bw ( KiB/s): min=75200, max=87040, per=88.76%, avg=79800.00, stdev=5084.43, samples=4 00:28:08.077 iops : min= 4700, max= 5440, avg=4987.50, stdev=317.78, samples=4 00:28:08.077 lat (msec) : 2=0.04%, 4=0.58%, 10=82.01%, 20=17.37% 00:28:08.077 cpu : usr=87.33%, sys=11.53%, ctx=15, majf=0, minf=9 00:28:08.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:28:08.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:08.077 issued rwts: total=19526,10030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:08.077 00:28:08.077 Run status group 0 (all jobs): 00:28:08.077 READ: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=305MiB (320MB), run=2005-2005msec 00:28:08.077 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=157MiB (164MB), run=1785-1785msec 00:28:08.077 08:20:38 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.077 08:20:38 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:08.077 08:20:38 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:08.077 08:20:38 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:08.077 08:20:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:08.077 08:20:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:08.077 08:20:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:08.077 08:20:38 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:08.077 08:20:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:08.077 08:20:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:08.077 08:20:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:28:08.077 08:20:38 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 
10.0.0.2 00:28:08.684 Nvme0n1 00:28:08.684 08:20:39 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:09.284 08:20:39 -- host/fio.sh@53 -- # ls_guid=a658be07-c0f3-4965-9a73-a33759a2b4a3 00:28:09.285 08:20:39 -- host/fio.sh@54 -- # get_lvs_free_mb a658be07-c0f3-4965-9a73-a33759a2b4a3 00:28:09.285 08:20:39 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a658be07-c0f3-4965-9a73-a33759a2b4a3 00:28:09.285 08:20:39 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:09.285 08:20:39 -- common/autotest_common.sh@1345 -- # local fc 00:28:09.285 08:20:39 -- common/autotest_common.sh@1346 -- # local cs 00:28:09.285 08:20:39 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:09.285 08:20:39 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:09.285 { 00:28:09.285 "uuid": "a658be07-c0f3-4965-9a73-a33759a2b4a3", 00:28:09.285 "name": "lvs_0", 00:28:09.285 "base_bdev": "Nvme0n1", 00:28:09.285 "total_data_clusters": 1787, 00:28:09.285 "free_clusters": 1787, 00:28:09.285 "block_size": 512, 00:28:09.285 "cluster_size": 1073741824 00:28:09.285 } 00:28:09.285 ]' 00:28:09.285 08:20:39 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a658be07-c0f3-4965-9a73-a33759a2b4a3") .free_clusters' 00:28:09.285 08:20:39 -- common/autotest_common.sh@1348 -- # fc=1787 00:28:09.285 08:20:39 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a658be07-c0f3-4965-9a73-a33759a2b4a3") .cluster_size' 00:28:09.285 08:20:39 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:28:09.285 08:20:39 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:28:09.285 08:20:39 -- common/autotest_common.sh@1353 -- # echo 1829888 00:28:09.285 1829888 00:28:09.285 08:20:39 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:28:09.545 276aa6d9-7cae-44d3-aac5-a0ea7fc9f52f 00:28:09.545 08:20:40 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:09.805 08:20:40 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:09.805 08:20:40 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:10.066 08:20:40 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:10.066 08:20:40 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:10.066 08:20:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:10.066 08:20:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:10.066 08:20:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:10.066 08:20:40 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:10.066 08:20:40 
-- common/autotest_common.sh@1320 -- # shift 00:28:10.066 08:20:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:10.066 08:20:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:10.066 08:20:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:10.066 08:20:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:10.066 08:20:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:10.066 08:20:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:10.066 08:20:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:10.066 08:20:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:10.331 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:10.331 fio-3.35 00:28:10.331 Starting 1 thread 00:28:10.331 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.875 00:28:12.875 test: (groupid=0, jobs=1): err= 0: pid=1209914: Tue Jun 11 08:20:43 2024 00:28:12.875 read: IOPS=11.0k, BW=42.9MiB/s (45.0MB/s)(85.9MiB/2004msec) 00:28:12.875 slat (nsec): min=2052, max=107980, avg=2211.47, stdev=977.97 00:28:12.875 clat (usec): min=1913, max=10515, avg=6433.70, stdev=488.84 00:28:12.875 lat (usec): min=1930, max=10517, avg=6435.91, stdev=488.80 00:28:12.875 clat percentiles (usec): 00:28:12.875 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:28:12.875 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6521], 00:28:12.875 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7177], 00:28:12.875 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[ 9110], 99.95th=[ 9896], 00:28:12.875 | 99.99th=[10421] 00:28:12.875 bw ( KiB/s): min=42560, max=44456, per=99.83%, avg=43832.00, stdev=860.21, samples=4 00:28:12.875 iops : min=10640, max=11114, avg=10958.00, stdev=215.05, samples=4 00:28:12.875 write: IOPS=10.9k, BW=42.8MiB/s (44.8MB/s)(85.7MiB/2004msec); 0 zone resets 00:28:12.876 slat (nsec): min=2117, max=95713, avg=2298.89, stdev=690.51 00:28:12.876 clat (usec): min=1047, max=9141, avg=5146.81, stdev=413.61 00:28:12.876 lat (usec): min=1054, max=9143, avg=5149.11, stdev=413.59 00:28:12.876 clat percentiles (usec): 00:28:12.876 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:28:12.876 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5276], 00:28:12.876 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5800], 00:28:12.876 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6783], 99.95th=[ 7701], 00:28:12.876 | 99.99th=[ 8979] 00:28:12.876 bw ( KiB/s): min=42896, max=44296, per=99.99%, avg=43780.00, 
stdev=609.77, samples=4 00:28:12.876 iops : min=10724, max=11074, avg=10945.00, stdev=152.44, samples=4 00:28:12.876 lat (msec) : 2=0.02%, 4=0.15%, 10=99.80%, 20=0.02% 00:28:12.876 cpu : usr=71.99%, sys=26.71%, ctx=28, majf=0, minf=6 00:28:12.876 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:12.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:12.876 issued rwts: total=21998,21935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.876 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:12.876 00:28:12.876 Run status group 0 (all jobs): 00:28:12.876 READ: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=85.9MiB (90.1MB), run=2004-2004msec 00:28:12.876 WRITE: bw=42.8MiB/s (44.8MB/s), 42.8MiB/s-42.8MiB/s (44.8MB/s-44.8MB/s), io=85.7MiB (89.8MB), run=2004-2004msec 00:28:12.876 08:20:43 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:12.876 08:20:43 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:13.820 08:20:44 -- host/fio.sh@64 -- # ls_nested_guid=c425ca67-0800-4418-8b4a-e2651cb6c46a 00:28:13.820 08:20:44 -- host/fio.sh@65 -- # get_lvs_free_mb c425ca67-0800-4418-8b4a-e2651cb6c46a 00:28:13.820 08:20:44 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c425ca67-0800-4418-8b4a-e2651cb6c46a 00:28:13.820 08:20:44 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:13.820 08:20:44 -- common/autotest_common.sh@1345 -- # local fc 00:28:13.820 08:20:44 -- common/autotest_common.sh@1346 -- # local cs 00:28:13.820 08:20:44 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:13.820 08:20:44 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:13.820 { 00:28:13.820 "uuid": "a658be07-c0f3-4965-9a73-a33759a2b4a3", 00:28:13.820 "name": "lvs_0", 00:28:13.820 "base_bdev": "Nvme0n1", 00:28:13.820 "total_data_clusters": 1787, 00:28:13.820 "free_clusters": 0, 00:28:13.820 "block_size": 512, 00:28:13.820 "cluster_size": 1073741824 00:28:13.820 }, 00:28:13.820 { 00:28:13.820 "uuid": "c425ca67-0800-4418-8b4a-e2651cb6c46a", 00:28:13.820 "name": "lvs_n_0", 00:28:13.820 "base_bdev": "276aa6d9-7cae-44d3-aac5-a0ea7fc9f52f", 00:28:13.820 "total_data_clusters": 457025, 00:28:13.820 "free_clusters": 457025, 00:28:13.820 "block_size": 512, 00:28:13.820 "cluster_size": 4194304 00:28:13.820 } 00:28:13.820 ]' 00:28:13.820 08:20:44 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c425ca67-0800-4418-8b4a-e2651cb6c46a") .free_clusters' 00:28:13.820 08:20:44 -- common/autotest_common.sh@1348 -- # fc=457025 00:28:13.820 08:20:44 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c425ca67-0800-4418-8b4a-e2651cb6c46a") .cluster_size' 00:28:14.080 08:20:44 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:14.080 08:20:44 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:28:14.080 08:20:44 -- common/autotest_common.sh@1353 -- # echo 1828100 00:28:14.080 1828100 00:28:14.080 08:20:44 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:28:15.021 72a53391-9c8e-40d8-be1f-c66d40ec457e 00:28:15.021 08:20:45 -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:15.281 08:20:45 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:15.281 08:20:45 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:15.542 08:20:45 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:15.542 08:20:45 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:15.542 08:20:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:15.542 08:20:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:15.542 08:20:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:15.542 08:20:45 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.542 08:20:45 -- common/autotest_common.sh@1320 -- # shift 00:28:15.542 08:20:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:15.542 08:20:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.542 08:20:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.542 08:20:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:15.542 08:20:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:15.542 08:20:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:15.542 08:20:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:15.542 08:20:46 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.542 08:20:46 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.542 08:20:46 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:15.542 08:20:46 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:15.542 08:20:46 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:15.542 08:20:46 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:15.542 08:20:46 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:15.542 08:20:46 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:15.802 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:15.802 fio-3.35 00:28:15.802 Starting 1 thread 00:28:15.802 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.345 00:28:18.345 test: (groupid=0, jobs=1): err= 0: pid=1211108: Tue Jun 11 08:20:48 2024 00:28:18.345 read: IOPS=9709, BW=37.9MiB/s (39.8MB/s)(76.1MiB/2006msec) 00:28:18.345 slat (usec): min=2, max=111, avg= 2.18, stdev= 1.12 00:28:18.345 clat (usec): min=2069, max=11817, 
avg=7273.51, stdev=560.70 00:28:18.345 lat (usec): min=2085, max=11819, avg=7275.69, stdev=560.64 00:28:18.345 clat percentiles (usec): 00:28:18.345 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:28:18.345 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:28:18.345 | 70.00th=[ 7570], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:28:18.345 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9503], 99.95th=[10683], 00:28:18.345 | 99.99th=[11731] 00:28:18.345 bw ( KiB/s): min=37504, max=39584, per=99.92%, avg=38808.00, stdev=924.66, samples=4 00:28:18.345 iops : min= 9376, max= 9896, avg=9702.00, stdev=231.17, samples=4 00:28:18.345 write: IOPS=9714, BW=37.9MiB/s (39.8MB/s)(76.1MiB/2006msec); 0 zone resets 00:28:18.345 slat (nsec): min=2140, max=94347, avg=2272.04, stdev=716.11 00:28:18.345 clat (usec): min=1062, max=10916, avg=5802.89, stdev=491.53 00:28:18.345 lat (usec): min=1070, max=10919, avg=5805.16, stdev=491.51 00:28:18.345 clat percentiles (usec): 00:28:18.345 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:28:18.345 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:28:18.345 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6521], 00:28:18.345 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8717], 99.95th=[10159], 00:28:18.345 | 99.99th=[10945] 00:28:18.345 bw ( KiB/s): min=38224, max=39424, per=100.00%, avg=38868.00, stdev=495.29, samples=4 00:28:18.345 iops : min= 9556, max= 9856, avg=9717.00, stdev=123.82, samples=4 00:28:18.345 lat (msec) : 2=0.01%, 4=0.12%, 10=99.81%, 20=0.07% 00:28:18.345 cpu : usr=73.32%, sys=25.64%, ctx=38, majf=0, minf=6 00:28:18.345 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:18.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:18.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.345 issued rwts: total=19477,19488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.345 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.345 00:28:18.345 Run status group 0 (all jobs): 00:28:18.345 READ: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=76.1MiB (79.8MB), run=2006-2006msec 00:28:18.345 WRITE: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), io=76.1MiB (79.8MB), run=2006-2006msec 00:28:18.345 08:20:48 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:18.345 08:20:48 -- host/fio.sh@74 -- # sync 00:28:18.345 08:20:48 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:20.257 08:20:50 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:20.517 08:20:51 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:21.088 08:20:51 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:21.348 08:20:51 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:23.260 08:20:53 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:23.260 08:20:53 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:28:23.260 08:20:53 -- host/fio.sh@86 -- # nvmftestfini 00:28:23.260 
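For reference, the free-space figures traced in the lvol steps above follow directly from the lvstore geometry reported by bdev_lvol_get_lvstores: free MiB = free_clusters * cluster_size / 1048576, i.e. 1787 * 1073741824 / 1048576 = 1829888 for lvs_0 and 457025 * 4194304 / 1048576 = 1828100 for lvs_n_0. A minimal shell sketch of that arithmetic (the helper name lvs_free_mb is illustrative only, not the common.sh implementation):

lvs_free_mb() {
    # query the lvstore list once and pull the two fields used in the trace above
    local uuid=$1
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    local lvs_info fc cs
    lvs_info=$("$rpc" bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$lvs_info")
    cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<<"$lvs_info")
    # free bytes expressed in MiB, e.g. 1787*1073741824/1048576 = 1829888
    echo $(( fc * cs / 1048576 ))
}
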
08:20:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:23.260 08:20:53 -- nvmf/common.sh@116 -- # sync 00:28:23.260 08:20:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:23.260 08:20:53 -- nvmf/common.sh@119 -- # set +e 00:28:23.260 08:20:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:23.260 08:20:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:23.260 rmmod nvme_tcp 00:28:23.260 rmmod nvme_fabrics 00:28:23.260 rmmod nvme_keyring 00:28:23.260 08:20:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:23.260 08:20:53 -- nvmf/common.sh@123 -- # set -e 00:28:23.260 08:20:53 -- nvmf/common.sh@124 -- # return 0 00:28:23.260 08:20:53 -- nvmf/common.sh@477 -- # '[' -n 1207316 ']' 00:28:23.260 08:20:53 -- nvmf/common.sh@478 -- # killprocess 1207316 00:28:23.260 08:20:53 -- common/autotest_common.sh@926 -- # '[' -z 1207316 ']' 00:28:23.260 08:20:53 -- common/autotest_common.sh@930 -- # kill -0 1207316 00:28:23.260 08:20:53 -- common/autotest_common.sh@931 -- # uname 00:28:23.260 08:20:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:23.260 08:20:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1207316 00:28:23.260 08:20:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:23.260 08:20:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:23.260 08:20:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1207316' 00:28:23.260 killing process with pid 1207316 00:28:23.260 08:20:53 -- common/autotest_common.sh@945 -- # kill 1207316 00:28:23.260 08:20:53 -- common/autotest_common.sh@950 -- # wait 1207316 00:28:23.521 08:20:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:23.521 08:20:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:23.521 08:20:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:23.521 08:20:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.521 08:20:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:23.521 08:20:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.521 08:20:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.521 08:20:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.435 08:20:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:25.435 00:28:25.435 real 0m32.464s 00:28:25.435 user 2m44.834s 00:28:25.435 sys 0m9.411s 00:28:25.435 08:20:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.435 08:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:25.435 ************************************ 00:28:25.435 END TEST nvmf_fio_host 00:28:25.435 ************************************ 00:28:25.697 08:20:56 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:25.697 08:20:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:25.697 08:20:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:25.697 08:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:25.697 ************************************ 00:28:25.697 START TEST nvmf_failover 00:28:25.697 ************************************ 00:28:25.697 08:20:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:25.697 * Looking for test storage... 
00:28:25.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.697 08:20:56 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.697 08:20:56 -- nvmf/common.sh@7 -- # uname -s 00:28:25.697 08:20:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.697 08:20:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.697 08:20:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.697 08:20:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.697 08:20:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.698 08:20:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.698 08:20:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.698 08:20:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.698 08:20:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.698 08:20:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.698 08:20:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.698 08:20:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.698 08:20:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.698 08:20:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.698 08:20:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.698 08:20:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.698 08:20:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.698 08:20:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.698 08:20:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.698 08:20:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.698 08:20:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.698 08:20:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.698 08:20:56 -- paths/export.sh@5 -- # export PATH 00:28:25.698 08:20:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.698 08:20:56 -- nvmf/common.sh@46 -- # : 0 00:28:25.698 08:20:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:25.698 08:20:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:25.698 08:20:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:25.698 08:20:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.698 08:20:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.698 08:20:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:25.698 08:20:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:25.698 08:20:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:25.698 08:20:56 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.698 08:20:56 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.698 08:20:56 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:25.698 08:20:56 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:25.698 08:20:56 -- host/failover.sh@18 -- # nvmftestinit 00:28:25.698 08:20:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:25.698 08:20:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.698 08:20:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:25.698 08:20:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:25.698 08:20:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:25.698 08:20:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.698 08:20:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.698 08:20:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.698 08:20:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:25.698 08:20:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:25.698 08:20:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:25.698 08:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:33.840 08:21:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:33.840 08:21:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:33.840 08:21:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:33.840 08:21:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:33.840 08:21:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:33.840 08:21:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:33.840 08:21:03 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:33.840 08:21:03 -- nvmf/common.sh@294 -- # net_devs=() 00:28:33.840 08:21:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:33.840 08:21:03 -- nvmf/common.sh@295 -- # e810=() 00:28:33.840 08:21:03 -- nvmf/common.sh@295 -- # local -ga e810 00:28:33.840 08:21:03 -- nvmf/common.sh@296 -- # x722=() 00:28:33.840 08:21:03 -- nvmf/common.sh@296 -- # local -ga x722 00:28:33.840 08:21:03 -- nvmf/common.sh@297 -- # mlx=() 00:28:33.840 08:21:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:33.840 08:21:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.840 08:21:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:33.840 08:21:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:33.840 08:21:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:33.840 08:21:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:33.840 08:21:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:33.840 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:33.840 08:21:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:33.840 08:21:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:33.840 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:33.840 08:21:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:33.840 08:21:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:33.840 08:21:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:33.840 08:21:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.840 08:21:03 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:28:33.840 08:21:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.840 08:21:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:33.840 Found net devices under 0000:31:00.0: cvl_0_0 00:28:33.840 08:21:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.840 08:21:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:33.840 08:21:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.840 08:21:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:33.840 08:21:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.840 08:21:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:33.840 Found net devices under 0000:31:00.1: cvl_0_1 00:28:33.840 08:21:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.840 08:21:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:33.840 08:21:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:33.840 08:21:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:33.841 08:21:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:33.841 08:21:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:33.841 08:21:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.841 08:21:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.841 08:21:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.841 08:21:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:33.841 08:21:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.841 08:21:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.841 08:21:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:33.841 08:21:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.841 08:21:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.841 08:21:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:33.841 08:21:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:33.841 08:21:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.841 08:21:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.841 08:21:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.841 08:21:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.841 08:21:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:33.841 08:21:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.841 08:21:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.841 08:21:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.841 08:21:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:33.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:28:33.841 00:28:33.841 --- 10.0.0.2 ping statistics --- 00:28:33.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.841 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:28:33.841 08:21:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:28:33.841 00:28:33.841 --- 10.0.0.1 ping statistics --- 00:28:33.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.841 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:28:33.841 08:21:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.841 08:21:03 -- nvmf/common.sh@410 -- # return 0 00:28:33.841 08:21:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:33.841 08:21:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.841 08:21:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:33.841 08:21:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:33.841 08:21:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.841 08:21:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:33.841 08:21:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:33.841 08:21:03 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:33.841 08:21:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:33.841 08:21:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:33.841 08:21:03 -- common/autotest_common.sh@10 -- # set +x 00:28:33.841 08:21:03 -- nvmf/common.sh@469 -- # nvmfpid=1216873 00:28:33.841 08:21:03 -- nvmf/common.sh@470 -- # waitforlisten 1216873 00:28:33.841 08:21:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:33.841 08:21:03 -- common/autotest_common.sh@819 -- # '[' -z 1216873 ']' 00:28:33.841 08:21:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.841 08:21:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:33.841 08:21:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.841 08:21:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:33.841 08:21:03 -- common/autotest_common.sh@10 -- # set +x 00:28:33.841 [2024-06-11 08:21:03.636425] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:33.841 [2024-06-11 08:21:03.636495] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.841 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.841 [2024-06-11 08:21:03.727855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:33.841 [2024-06-11 08:21:03.824154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:33.841 [2024-06-11 08:21:03.824329] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.841 [2024-06-11 08:21:03.824338] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.841 [2024-06-11 08:21:03.824348] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.841 [2024-06-11 08:21:03.824499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.841 [2024-06-11 08:21:03.824685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.841 [2024-06-11 08:21:03.824686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.841 08:21:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:33.841 08:21:04 -- common/autotest_common.sh@852 -- # return 0 00:28:33.841 08:21:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:33.841 08:21:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:33.841 08:21:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.841 08:21:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.841 08:21:04 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:34.101 [2024-06-11 08:21:04.591199] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.101 08:21:04 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:34.361 Malloc0 00:28:34.362 08:21:04 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.362 08:21:04 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.622 08:21:05 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.622 [2024-06-11 08:21:05.267007] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.882 08:21:05 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:34.882 [2024-06-11 08:21:05.431491] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:34.882 08:21:05 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:35.143 [2024-06-11 08:21:05.596009] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:35.143 08:21:05 -- host/failover.sh@31 -- # bdevperf_pid=1217331 00:28:35.143 08:21:05 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:35.143 08:21:05 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:35.143 08:21:05 -- host/failover.sh@34 -- # waitforlisten 1217331 /var/tmp/bdevperf.sock 00:28:35.143 08:21:05 -- common/autotest_common.sh@819 -- # '[' -z 1217331 ']' 00:28:35.143 08:21:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:35.143 08:21:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:35.143 08:21:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:35.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:35.143 08:21:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:35.143 08:21:05 -- common/autotest_common.sh@10 -- # set +x 00:28:36.085 08:21:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:36.085 08:21:06 -- common/autotest_common.sh@852 -- # return 0 00:28:36.085 08:21:06 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:36.085 NVMe0n1 00:28:36.085 08:21:06 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:36.346 00:28:36.346 08:21:06 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:36.346 08:21:06 -- host/failover.sh@39 -- # run_test_pid=1217538 00:28:36.346 08:21:06 -- host/failover.sh@41 -- # sleep 1 00:28:37.726 08:21:07 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.726 [2024-06-11 08:21:08.108482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.726 [2024-06-11 08:21:08.108536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.726 [2024-06-11 08:21:08.108546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.726 [2024-06-11 08:21:08.108550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the state(5) to be set 00:28:37.727 [2024-06-11 08:21:08.108589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e47af0 is same with the 
state(5) to be set 00:28:37.727 08:21:08 -- host/failover.sh@45 -- # sleep 3 00:28:41.028 08:21:11 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:41.028 00:28:41.028 08:21:11 -- host/failover.sh@48
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:41.290 [2024-06-11 08:21:11.685159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e491e0 is same with the state(5) to be set 00:28:41.291 08:21:11 -- host/failover.sh@50 -- # sleep 3 00:28:44.592 08:21:14 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.592 [2024-06-11 08:21:14.855507] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.592 08:21:14 -- host/failover.sh@55 -- # sleep 1 00:28:45.534 08:21:15 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:45.534 [2024-06-11 08:21:16.024921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e498a0 is same with the state(5) to be set 00:28:45.534 08:21:16 -- host/failover.sh@59 -- # wait 1217538 00:28:52.129 0 00:28:52.129 08:21:22 -- host/failover.sh@61 -- # killprocess 1217331 00:28:52.129 08:21:22 -- common/autotest_common.sh@926 -- # '[' -z 1217331 ']' 00:28:52.129 08:21:22 -- common/autotest_common.sh@930 -- # kill -0 1217331 00:28:52.129 08:21:22 -- common/autotest_common.sh@931 -- # uname 00:28:52.129 08:21:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:52.129
08:21:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1217331 00:28:52.129 08:21:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:52.129 08:21:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:52.129 08:21:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1217331' 00:28:52.129 killing process with pid 1217331 00:28:52.129 08:21:22 -- common/autotest_common.sh@945 -- # kill 1217331 00:28:52.129 08:21:22 -- common/autotest_common.sh@950 -- # wait 1217331 00:28:52.129 08:21:22 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:52.129 [2024-06-11 08:21:05.670741] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:52.129 [2024-06-11 08:21:05.670796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217331 ] 00:28:52.129 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.129 [2024-06-11 08:21:05.730545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.129 [2024-06-11 08:21:05.792650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.129 Running I/O for 15 seconds... 00:28:52.129 [2024-06-11 08:21:08.109058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40304 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.129 [2024-06-11 08:21:08.109553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.129 [2024-06-11 08:21:08.109570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.129 [2024-06-11 08:21:08.109579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 
[2024-06-11 08:21:08.109706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.109978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.109987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.109994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.110026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.110110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.110144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.130 [2024-06-11 08:21:08.110160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.130 [2024-06-11 08:21:08.110209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.130 [2024-06-11 08:21:08.110221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 
[2024-06-11 08:21:08.110368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:68 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.131 [2024-06-11 08:21:08.110750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.131 [2024-06-11 08:21:08.110852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.131 [2024-06-11 08:21:08.110861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40808 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.110867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.110883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.110899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.110915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.110931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.110948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.110964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.110981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.110990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.110997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.111012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:52.132 [2024-06-11 08:21:08.111028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.111045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.111060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.111076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.111092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.111107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.111123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.132 [2024-06-11 08:21:08.111141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.111157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:08.111173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:08.111195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:52.132 [2024-06-11 08:21:08.111202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually:
00:28:52.132 [2024-06-11 08:21:08.111208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40864 len:8 PRP1 0x0 PRP2 0x0
00:28:52.132 [2024-06-11 08:21:08.111216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.132 [2024-06-11 08:21:08.111254] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1298930 was disconnected and freed. reset controller.
00:28:52.132 [2024-06-11 08:21:08.111268] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:52.132 [2024-06-11 08:21:08.111288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.132 [2024-06-11 08:21:08.111296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.132 [2024-06-11 08:21:08.111304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.132 [2024-06-11 08:21:08.111311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.132 [2024-06-11 08:21:08.111322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.132 [2024-06-11 08:21:08.111329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.132 [2024-06-11 08:21:08.111337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.132 [2024-06-11 08:21:08.111344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.132 [2024-06-11 08:21:08.111351] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.132 [2024-06-11 08:21:08.113529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.132 [2024-06-11 08:21:08.113549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1279bd0 (9): Bad file descriptor
00:28:52.132 [2024-06-11 08:21:08.183896] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:52.132 [2024-06-11 08:21:11.686046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 
08:21:11.686254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.132 [2024-06-11 08:21:11.686271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.132 [2024-06-11 08:21:11.686278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686588] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.133 [2024-06-11 08:21:11.686790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.133 [2024-06-11 08:21:11.686799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.133 [2024-06-11 08:21:11.686806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.686822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.686838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.686854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.686870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.686885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.686902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109656 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.686918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.686933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.686952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.686968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.686983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.686993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 
[2024-06-11 08:21:11.687079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687241] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.134 [2024-06-11 08:21:11.687353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.134 [2024-06-11 08:21:11.687433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.134 [2024-06-11 08:21:11.687446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687565] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.687835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:52.135 [2024-06-11 08:21:11.687891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.687987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.687994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.688002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.688009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.688018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.135 [2024-06-11 08:21:11.688025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.688034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.688040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 
08:21:11.688049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.688056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.135 [2024-06-11 08:21:11.688065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.135 [2024-06-11 08:21:11.688071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.136 [2024-06-11 08:21:11.688080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.136 [2024-06-11 08:21:11.688087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.136 [2024-06-11 08:21:11.688096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:52.136 [2024-06-11 08:21:11.688102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.136 [2024-06-11 08:21:11.688111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.136 [2024-06-11 08:21:11.688118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.136 [2024-06-11 08:21:11.688128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.136 [2024-06-11 08:21:11.688134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.136 [2024-06-11 08:21:11.688155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:52.136 [2024-06-11 08:21:11.688162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:52.136 [2024-06-11 08:21:11.688170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110096 len:8 PRP1 0x0 PRP2 0x0 00:28:52.136 [2024-06-11 08:21:11.688179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.136 [2024-06-11 08:21:11.688217] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1286090 was disconnected and freed. reset controller. 
00:28:52.136 [2024-06-11 08:21:11.688227] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:28:52.136 [2024-06-11 08:21:11.688246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.136 [2024-06-11 08:21:11.688254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.136 [2024-06-11 08:21:11.688262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.136 [2024-06-11 08:21:11.688269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.136 [2024-06-11 08:21:11.688277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.136 [2024-06-11 08:21:11.688283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.136 [2024-06-11 08:21:11.688291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.136 [2024-06-11 08:21:11.688297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.136 [2024-06-11 08:21:11.688305] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.136 [2024-06-11 08:21:11.688329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1279bd0 (9): Bad file descriptor
00:28:52.136 [2024-06-11 08:21:11.690641] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:52.136 [2024-06-11 08:21:11.766222] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:52.136 [2024-06-11 08:21:16.025122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.136 [2024-06-11 08:21:16.025164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.136 [2024-06-11 08:21:16.025219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1279bd0 is same with the state(5) to be set
00:28:52.136 [... repeated nvme_qpair print_command/print_completion NOTICE pairs omitted: the remaining admin ASYNC EVENT REQUESTs (qid 0, cid 1-3) and the outstanding READ/WRITE commands on qid 1 (cids 0-126, lbas 42008-43328) all complete as ABORTED - SQ DELETION (00/08) while tqpair 0x1279bd0 is torn down ...]
[2024-06-11 08:21:16.027869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:52.139 [2024-06-11 08:21:16.027876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:52.139 [2024-06-11 08:21:16.027882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42728 len:8 PRP1 0x0 PRP2 0x0 00:28:52.139 [2024-06-11 08:21:16.027889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.139 [2024-06-11 08:21:16.027927] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x129c750 was disconnected and freed. reset controller. 00:28:52.139 [2024-06-11 08:21:16.027936] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:52.139 [2024-06-11 08:21:16.027944] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.139 [2024-06-11 08:21:16.030349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.139 [2024-06-11 08:21:16.030376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1279bd0 (9): Bad file descriptor 00:28:52.139 [2024-06-11 08:21:16.061204] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:52.139 00:28:52.139 Latency(us) 00:28:52.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.139 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:52.139 Verification LBA range: start 0x0 length 0x4000 00:28:52.139 NVMe0n1 : 15.00 19887.71 77.69 638.57 0.00 6219.96 542.72 12615.68 00:28:52.139 =================================================================================================================== 00:28:52.139 Total : 19887.71 77.69 638.57 0.00 6219.96 542.72 12615.68 00:28:52.139 Received shutdown signal, test time was about 15.000000 seconds 00:28:52.139 00:28:52.140 Latency(us) 00:28:52.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.140 =================================================================================================================== 00:28:52.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.140 08:21:22 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:52.140 08:21:22 -- host/failover.sh@65 -- # count=3 00:28:52.140 08:21:22 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:52.140 08:21:22 -- host/failover.sh@73 -- # bdevperf_pid=1220903 00:28:52.140 08:21:22 -- host/failover.sh@75 -- # waitforlisten 1220903 /var/tmp/bdevperf.sock 00:28:52.140 08:21:22 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:52.140 08:21:22 -- common/autotest_common.sh@819 -- # '[' -z 1220903 ']' 00:28:52.140 08:21:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:52.140 08:21:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:52.140 08:21:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:52.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
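The count=3 check above is the pass criterion for the run that just finished: each forced path change (4420 -> 4421 -> 4422 -> 4420) must be followed by a 'Resetting controller successful' notice from bdev_nvme. Assuming the bdevperf output has been captured to the try.txt file this script removes at the end (the real failover.sh plumbs the input slightly differently), a rough sketch of that check is:

# sketch of the pass check - fail unless all three failovers completed
try=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$try")
if (( count != 3 )); then
    echo "expected 3 successful controller resets, found $count" >&2
    exit 1
fi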
00:28:52.140 08:21:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:52.140 08:21:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.711 08:21:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:52.711 08:21:23 -- common/autotest_common.sh@852 -- # return 0 00:28:52.711 08:21:23 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:52.711 [2024-06-11 08:21:23.224037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:52.711 08:21:23 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:52.972 [2024-06-11 08:21:23.384412] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:52.972 08:21:23 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:53.234 NVMe0n1 00:28:53.234 08:21:23 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:53.515 00:28:53.515 08:21:24 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:53.812 00:28:53.812 08:21:24 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:53.812 08:21:24 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:54.074 08:21:24 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:54.074 08:21:24 -- host/failover.sh@87 -- # sleep 3 00:28:57.373 08:21:27 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:57.373 08:21:27 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:57.373 08:21:27 -- host/failover.sh@90 -- # run_test_pid=1222100 00:28:57.373 08:21:27 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:57.373 08:21:27 -- host/failover.sh@92 -- # wait 1222100 00:28:58.314 0 00:28:58.574 08:21:28 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:58.574 [2024-06-11 08:21:22.335926] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
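For this second phase the trace above starts bdevperf idle (-z) and configures it over its own RPC socket before kicking off the workload. The following is a minimal sketch of that flow, using the binary, socket and bdevperf.py helper paths shown in the trace (the polling loop is a crude stand-in for the script's waitforlisten helper, and the attach/detach steps are the ones shown above):

# sketch of the RPC-driven bdevperf phase
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock
$spdk/build/examples/bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
# wait until the RPC socket answers before configuring it
until $spdk/scripts/rpc.py -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# ...attach NVMe0 to 4420/4421/4422 and detach the active path, as shown above...
$spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests   # run the queued verify workload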
00:28:58.574 [2024-06-11 08:21:22.335987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220903 ] 00:28:58.574 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.574 [2024-06-11 08:21:22.396425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.574 [2024-06-11 08:21:22.458793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.574 [2024-06-11 08:21:24.670532] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:58.574 [2024-06-11 08:21:24.670577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.574 [2024-06-11 08:21:24.670588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.574 [2024-06-11 08:21:24.670597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.574 [2024-06-11 08:21:24.670605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.574 [2024-06-11 08:21:24.670612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.574 [2024-06-11 08:21:24.670619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.574 [2024-06-11 08:21:24.670629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.574 [2024-06-11 08:21:24.670636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.574 [2024-06-11 08:21:24.670643] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:58.574 [2024-06-11 08:21:24.670667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:58.574 [2024-06-11 08:21:24.670681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139bbd0 (9): Bad file descriptor 00:28:58.574 [2024-06-11 08:21:24.720696] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:58.574 Running I/O for 1 seconds... 
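As a quick consistency check on the result tables in this log: MiB/s is just IOPS times the 4096-byte I/O size (-o 4096) divided by 2^20, which matches both the 15-second summary above and the 1-second run reported below.

awk 'BEGIN { printf "%.2f\n", 19887.71 * 4096 / 1048576 }'   # 77.69 MiB/s (15 s run above)
awk 'BEGIN { printf "%.2f\n", 20044.46 * 4096 / 1048576 }'   # 78.30 MiB/s (1 s run below)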
00:28:58.575 00:28:58.575 Latency(us) 00:28:58.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.575 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:58.575 Verification LBA range: start 0x0 length 0x4000 00:28:58.575 NVMe0n1 : 1.00 20044.46 78.30 0.00 0.00 6357.00 1160.53 11796.48 00:28:58.575 =================================================================================================================== 00:28:58.575 Total : 20044.46 78.30 0.00 0.00 6357.00 1160.53 11796.48 00:28:58.575 08:21:28 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:58.575 08:21:28 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:58.575 08:21:29 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:58.836 08:21:29 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:58.836 08:21:29 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:58.836 08:21:29 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:59.098 08:21:29 -- host/failover.sh@101 -- # sleep 3 00:29:02.399 08:21:32 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:02.399 08:21:32 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:02.399 08:21:32 -- host/failover.sh@108 -- # killprocess 1220903 00:29:02.399 08:21:32 -- common/autotest_common.sh@926 -- # '[' -z 1220903 ']' 00:29:02.399 08:21:32 -- common/autotest_common.sh@930 -- # kill -0 1220903 00:29:02.399 08:21:32 -- common/autotest_common.sh@931 -- # uname 00:29:02.399 08:21:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:02.399 08:21:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1220903 00:29:02.399 08:21:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:02.399 08:21:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:02.399 08:21:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1220903' 00:29:02.399 killing process with pid 1220903 00:29:02.399 08:21:32 -- common/autotest_common.sh@945 -- # kill 1220903 00:29:02.399 08:21:32 -- common/autotest_common.sh@950 -- # wait 1220903 00:29:02.399 08:21:32 -- host/failover.sh@110 -- # sync 00:29:02.399 08:21:32 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:02.661 08:21:33 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:02.661 08:21:33 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:02.661 08:21:33 -- host/failover.sh@116 -- # nvmftestfini 00:29:02.661 08:21:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:02.661 08:21:33 -- nvmf/common.sh@116 -- # sync 00:29:02.661 08:21:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:02.661 08:21:33 -- nvmf/common.sh@119 -- # set +e 00:29:02.661 08:21:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:02.661 08:21:33 -- nvmf/common.sh@121 -- 
# modprobe -v -r nvme-tcp 00:29:02.661 rmmod nvme_tcp 00:29:02.661 rmmod nvme_fabrics 00:29:02.661 rmmod nvme_keyring 00:29:02.661 08:21:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:02.661 08:21:33 -- nvmf/common.sh@123 -- # set -e 00:29:02.661 08:21:33 -- nvmf/common.sh@124 -- # return 0 00:29:02.661 08:21:33 -- nvmf/common.sh@477 -- # '[' -n 1216873 ']' 00:29:02.661 08:21:33 -- nvmf/common.sh@478 -- # killprocess 1216873 00:29:02.661 08:21:33 -- common/autotest_common.sh@926 -- # '[' -z 1216873 ']' 00:29:02.661 08:21:33 -- common/autotest_common.sh@930 -- # kill -0 1216873 00:29:02.661 08:21:33 -- common/autotest_common.sh@931 -- # uname 00:29:02.661 08:21:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:02.661 08:21:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1216873 00:29:02.661 08:21:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:02.661 08:21:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:02.661 08:21:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1216873' 00:29:02.661 killing process with pid 1216873 00:29:02.661 08:21:33 -- common/autotest_common.sh@945 -- # kill 1216873 00:29:02.661 08:21:33 -- common/autotest_common.sh@950 -- # wait 1216873 00:29:02.922 08:21:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:02.922 08:21:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:02.922 08:21:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:02.922 08:21:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:02.922 08:21:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:02.922 08:21:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.922 08:21:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:02.922 08:21:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.836 08:21:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:04.836 00:29:04.836 real 0m39.358s 00:29:04.836 user 2m0.827s 00:29:04.836 sys 0m8.191s 00:29:04.836 08:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.836 08:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:04.836 ************************************ 00:29:04.836 END TEST nvmf_failover 00:29:04.836 ************************************ 00:29:05.097 08:21:35 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:05.097 08:21:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:05.097 08:21:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.097 08:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:05.097 ************************************ 00:29:05.097 START TEST nvmf_discovery 00:29:05.097 ************************************ 00:29:05.097 08:21:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:05.097 * Looking for test storage... 
00:29:05.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:05.097 08:21:35 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.097 08:21:35 -- nvmf/common.sh@7 -- # uname -s 00:29:05.097 08:21:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.097 08:21:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.097 08:21:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.097 08:21:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.097 08:21:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.097 08:21:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.097 08:21:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.097 08:21:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.097 08:21:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.097 08:21:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.097 08:21:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:05.097 08:21:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:05.097 08:21:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.098 08:21:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.098 08:21:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.098 08:21:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.098 08:21:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.098 08:21:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.098 08:21:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.098 08:21:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.098 08:21:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.098 08:21:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.098 08:21:35 -- paths/export.sh@5 -- # export PATH 00:29:05.098 08:21:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.098 08:21:35 -- nvmf/common.sh@46 -- # : 0 00:29:05.098 08:21:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:05.098 08:21:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:05.098 08:21:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:05.098 08:21:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.098 08:21:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.098 08:21:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:05.098 08:21:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:05.098 08:21:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:05.098 08:21:35 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:05.098 08:21:35 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:05.098 08:21:35 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:05.098 08:21:35 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:05.098 08:21:35 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:05.098 08:21:35 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:05.098 08:21:35 -- host/discovery.sh@25 -- # nvmftestinit 00:29:05.098 08:21:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:05.098 08:21:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.098 08:21:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:05.098 08:21:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:05.098 08:21:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:05.098 08:21:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.098 08:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:05.098 08:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.098 08:21:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:05.098 08:21:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:05.098 08:21:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:05.098 08:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:13.266 08:21:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:13.266 08:21:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:13.266 08:21:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:13.266 08:21:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:13.266 08:21:42 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:13.266 08:21:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:13.266 08:21:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:13.266 08:21:42 -- nvmf/common.sh@294 -- # net_devs=() 00:29:13.266 08:21:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:13.266 08:21:42 -- nvmf/common.sh@295 -- # e810=() 00:29:13.266 08:21:42 -- nvmf/common.sh@295 -- # local -ga e810 00:29:13.266 08:21:42 -- nvmf/common.sh@296 -- # x722=() 00:29:13.266 08:21:42 -- nvmf/common.sh@296 -- # local -ga x722 00:29:13.266 08:21:42 -- nvmf/common.sh@297 -- # mlx=() 00:29:13.266 08:21:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:13.266 08:21:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.266 08:21:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:13.266 08:21:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:13.266 08:21:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:13.266 08:21:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:13.266 08:21:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:13.266 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:13.266 08:21:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:13.266 08:21:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:13.266 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:13.266 08:21:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:13.266 08:21:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:13.266 
08:21:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.266 08:21:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:13.266 08:21:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.266 08:21:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:13.266 Found net devices under 0000:31:00.0: cvl_0_0 00:29:13.266 08:21:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.266 08:21:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:13.266 08:21:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.266 08:21:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:13.266 08:21:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.266 08:21:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:13.266 Found net devices under 0000:31:00.1: cvl_0_1 00:29:13.266 08:21:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.266 08:21:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:13.266 08:21:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:13.266 08:21:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:13.266 08:21:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.266 08:21:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.266 08:21:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.266 08:21:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:13.266 08:21:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.266 08:21:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.266 08:21:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:13.266 08:21:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.266 08:21:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.266 08:21:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:13.266 08:21:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:13.266 08:21:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.266 08:21:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.266 08:21:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.266 08:21:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.266 08:21:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:13.266 08:21:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.266 08:21:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.266 08:21:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.266 08:21:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:13.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:29:13.266 00:29:13.266 --- 10.0.0.2 ping statistics --- 00:29:13.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.266 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:29:13.266 08:21:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:29:13.266 00:29:13.266 --- 10.0.0.1 ping statistics --- 00:29:13.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.266 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:29:13.266 08:21:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.266 08:21:42 -- nvmf/common.sh@410 -- # return 0 00:29:13.266 08:21:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:13.266 08:21:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.266 08:21:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:13.266 08:21:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.266 08:21:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:13.266 08:21:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:13.266 08:21:42 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:13.266 08:21:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:13.266 08:21:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:13.266 08:21:42 -- common/autotest_common.sh@10 -- # set +x 00:29:13.266 08:21:42 -- nvmf/common.sh@469 -- # nvmfpid=1227303 00:29:13.266 08:21:42 -- nvmf/common.sh@470 -- # waitforlisten 1227303 00:29:13.266 08:21:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:13.266 08:21:42 -- common/autotest_common.sh@819 -- # '[' -z 1227303 ']' 00:29:13.266 08:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.266 08:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:13.266 08:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.266 08:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:13.266 08:21:42 -- common/autotest_common.sh@10 -- # set +x 00:29:13.266 [2024-06-11 08:21:43.027663] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:13.267 [2024-06-11 08:21:43.027715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.267 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.267 [2024-06-11 08:21:43.111814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.267 [2024-06-11 08:21:43.200484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.267 [2024-06-11 08:21:43.200645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.267 [2024-06-11 08:21:43.200662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.267 [2024-06-11 08:21:43.200669] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
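(Reader's note: the block above is the standard phy-mode network bring-up from nvmf/common.sh. The two E810 ports on this host, cvl_0_0 and cvl_0_1, are split so that the target port lives in its own network namespace, which is why every target-side command runs under "ip netns exec cvl_0_0_ns_spdk". Condensed from the commands in the log; interface names and addresses are specific to this machine:)

ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator interface, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                    # initiator -> target sanity check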
00:29:13.267 [2024-06-11 08:21:43.200696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.267 08:21:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:13.267 08:21:43 -- common/autotest_common.sh@852 -- # return 0 00:29:13.267 08:21:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:13.267 08:21:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:13.267 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.267 08:21:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.267 08:21:43 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.267 08:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.267 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.267 [2024-06-11 08:21:43.852265] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.267 08:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.267 08:21:43 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:13.267 08:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.267 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.267 [2024-06-11 08:21:43.860513] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:13.267 08:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.267 08:21:43 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:13.267 08:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.267 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.267 null0 00:29:13.267 08:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.267 08:21:43 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:13.267 08:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.267 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.267 null1 00:29:13.267 08:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.267 08:21:43 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:13.267 08:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.267 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.267 08:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.267 08:21:43 -- host/discovery.sh@45 -- # hostpid=1227544 00:29:13.267 08:21:43 -- host/discovery.sh@46 -- # waitforlisten 1227544 /tmp/host.sock 00:29:13.267 08:21:43 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:13.267 08:21:43 -- common/autotest_common.sh@819 -- # '[' -z 1227544 ']' 00:29:13.267 08:21:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:13.267 08:21:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:13.267 08:21:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:13.267 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:13.267 08:21:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:13.267 08:21:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.528 [2024-06-11 08:21:43.938711] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
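(Reader's note: at this point the target nvmf_tgt, running on core mask 0x2 inside the namespace, has a TCP transport and the well-known discovery subsystem listening on 8009, plus two null bdevs that will be exposed as namespaces later; a second nvmf_tgt on core mask 0x1 has just been started to act as the host, reachable on /tmp/host.sock. The target-side setup reduces to a handful of RPCs; rpc_cmd in the log is effectively the suite's wrapper around scripts/rpc.py, shown here directly against the default target socket:)

# Create the TCP transport (same flags as the suite uses).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

# Make the discovery subsystem reachable on 10.0.0.2:8009.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009

# Two null bdevs (name, size, block size) to attach as namespaces of cnode0 later.
scripts/rpc.py bdev_null_create null0 1000 512
scripts/rpc.py bdev_null_create null1 1000 512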
00:29:13.528 [2024-06-11 08:21:43.938770] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227544 ] 00:29:13.528 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.528 [2024-06-11 08:21:44.003164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.528 [2024-06-11 08:21:44.075475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:13.528 [2024-06-11 08:21:44.075611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.097 08:21:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:14.097 08:21:44 -- common/autotest_common.sh@852 -- # return 0 00:29:14.097 08:21:44 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.097 08:21:44 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:14.097 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.097 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.097 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.097 08:21:44 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:14.097 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.097 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.097 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.097 08:21:44 -- host/discovery.sh@72 -- # notify_id=0 00:29:14.097 08:21:44 -- host/discovery.sh@78 -- # get_subsystem_names 00:29:14.097 08:21:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.097 08:21:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:14.097 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.097 08:21:44 -- host/discovery.sh@59 -- # sort 00:29:14.097 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.097 08:21:44 -- host/discovery.sh@59 -- # xargs 00:29:14.355 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.356 08:21:44 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:29:14.356 08:21:44 -- host/discovery.sh@79 -- # get_bdev_list 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:14.356 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # sort 00:29:14.356 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # xargs 00:29:14.356 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.356 08:21:44 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:29:14.356 08:21:44 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:14.356 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.356 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.356 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.356 08:21:44 -- host/discovery.sh@82 -- # get_subsystem_names 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:29:14.356 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # sort 00:29:14.356 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # xargs 00:29:14.356 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.356 08:21:44 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:29:14.356 08:21:44 -- host/discovery.sh@83 -- # get_bdev_list 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:14.356 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # sort 00:29:14.356 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.356 08:21:44 -- host/discovery.sh@55 -- # xargs 00:29:14.356 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.356 08:21:44 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:14.356 08:21:44 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:14.356 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.356 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.356 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.356 08:21:44 -- host/discovery.sh@86 -- # get_subsystem_names 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:14.356 08:21:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # sort 00:29:14.356 08:21:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.356 08:21:44 -- host/discovery.sh@59 -- # xargs 00:29:14.356 08:21:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:29:14.615 08:21:45 -- host/discovery.sh@87 -- # get_bdev_list 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # xargs 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:14.615 08:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # sort 00:29:14.615 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.615 08:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:14.615 08:21:45 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.615 08:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.615 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.615 [2024-06-11 08:21:45.075566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.615 08:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@92 -- # get_subsystem_names 00:29:14.615 08:21:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.615 08:21:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:14.615 08:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.615 08:21:45 -- host/discovery.sh@59 -- # sort 00:29:14.615 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.615 08:21:45 
-- host/discovery.sh@59 -- # xargs 00:29:14.615 08:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:14.615 08:21:45 -- host/discovery.sh@93 -- # get_bdev_list 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:14.615 08:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # sort 00:29:14.615 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.615 08:21:45 -- host/discovery.sh@55 -- # xargs 00:29:14.615 08:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:29:14.615 08:21:45 -- host/discovery.sh@94 -- # get_notification_count 00:29:14.615 08:21:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:14.615 08:21:45 -- host/discovery.sh@74 -- # jq '. | length' 00:29:14.615 08:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.615 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.615 08:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@74 -- # notification_count=0 00:29:14.615 08:21:45 -- host/discovery.sh@75 -- # notify_id=0 00:29:14.615 08:21:45 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:14.615 08:21:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:14.615 08:21:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.615 08:21:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:14.615 08:21:45 -- host/discovery.sh@100 -- # sleep 1 00:29:15.182 [2024-06-11 08:21:45.782607] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:15.182 [2024-06-11 08:21:45.782626] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:15.182 [2024-06-11 08:21:45.782640] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:15.440 [2024-06-11 08:21:45.870927] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:15.440 [2024-06-11 08:21:46.055682] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:15.440 [2024-06-11 08:21:46.055705] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:15.700 08:21:46 -- host/discovery.sh@101 -- # get_subsystem_names 00:29:15.700 08:21:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:15.700 08:21:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:15.700 08:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.700 08:21:46 -- host/discovery.sh@59 -- # sort 00:29:15.700 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:29:15.700 08:21:46 -- host/discovery.sh@59 -- # xargs 00:29:15.700 08:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.700 08:21:46 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.700 08:21:46 -- host/discovery.sh@102 -- # get_bdev_list 00:29:15.700 08:21:46 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.700 08:21:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:15.700 08:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.700 08:21:46 -- host/discovery.sh@55 -- # sort 00:29:15.700 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:29:15.700 08:21:46 -- host/discovery.sh@55 -- # xargs 00:29:15.700 08:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.959 08:21:46 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:15.959 08:21:46 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:29:15.959 08:21:46 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:15.959 08:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.959 08:21:46 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:15.959 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:29:15.959 08:21:46 -- host/discovery.sh@63 -- # sort -n 00:29:15.959 08:21:46 -- host/discovery.sh@63 -- # xargs 00:29:15.959 08:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.959 08:21:46 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:29:15.959 08:21:46 -- host/discovery.sh@104 -- # get_notification_count 00:29:15.959 08:21:46 -- host/discovery.sh@74 -- # jq '. | length' 00:29:15.959 08:21:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:15.959 08:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.959 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:29:15.959 08:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.959 08:21:46 -- host/discovery.sh@74 -- # notification_count=1 00:29:15.959 08:21:46 -- host/discovery.sh@75 -- # notify_id=1 00:29:15.960 08:21:46 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:29:15.960 08:21:46 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:15.960 08:21:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.960 08:21:46 -- common/autotest_common.sh@10 -- # set +x 00:29:15.960 08:21:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.960 08:21:46 -- host/discovery.sh@109 -- # sleep 1 00:29:16.900 08:21:47 -- host/discovery.sh@110 -- # get_bdev_list 00:29:16.900 08:21:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.900 08:21:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:16.900 08:21:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.900 08:21:47 -- host/discovery.sh@55 -- # sort 00:29:16.900 08:21:47 -- common/autotest_common.sh@10 -- # set +x 00:29:16.900 08:21:47 -- host/discovery.sh@55 -- # xargs 00:29:16.900 08:21:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.900 08:21:47 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:16.900 08:21:47 -- host/discovery.sh@111 -- # get_notification_count 00:29:16.900 08:21:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:16.900 08:21:47 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:16.900 08:21:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.900 08:21:47 -- common/autotest_common.sh@10 -- # set +x 00:29:16.900 08:21:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.900 08:21:47 -- host/discovery.sh@74 -- # notification_count=1 00:29:16.900 08:21:47 -- host/discovery.sh@75 -- # notify_id=2 00:29:17.158 08:21:47 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:29:17.158 08:21:47 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:17.158 08:21:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:17.158 08:21:47 -- common/autotest_common.sh@10 -- # set +x 00:29:17.158 [2024-06-11 08:21:47.550330] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:17.158 [2024-06-11 08:21:47.551356] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:17.159 [2024-06-11 08:21:47.551381] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:17.159 08:21:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:17.159 08:21:47 -- host/discovery.sh@117 -- # sleep 1 00:29:17.159 [2024-06-11 08:21:47.681793] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:17.159 [2024-06-11 08:21:47.788560] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:17.159 [2024-06-11 08:21:47.788577] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:17.159 [2024-06-11 08:21:47.788583] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:18.096 08:21:48 -- host/discovery.sh@118 -- # get_subsystem_names 00:29:18.096 08:21:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:18.096 08:21:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:18.096 08:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.096 08:21:48 -- host/discovery.sh@59 -- # sort 00:29:18.096 08:21:48 -- common/autotest_common.sh@10 -- # set +x 00:29:18.096 08:21:48 -- host/discovery.sh@59 -- # xargs 00:29:18.096 08:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.096 08:21:48 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.096 08:21:48 -- host/discovery.sh@119 -- # get_bdev_list 00:29:18.096 08:21:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:18.096 08:21:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:18.096 08:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.096 08:21:48 -- host/discovery.sh@55 -- # sort 00:29:18.096 08:21:48 -- common/autotest_common.sh@10 -- # set +x 00:29:18.096 08:21:48 -- host/discovery.sh@55 -- # xargs 00:29:18.096 08:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.096 08:21:48 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:18.096 08:21:48 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:29:18.096 08:21:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:18.096 08:21:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:18.096 08:21:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.096 08:21:48 -- common/autotest_common.sh@10 -- # set +x 00:29:18.096 08:21:48 -- host/discovery.sh@63 -- # sort -n 00:29:18.096 08:21:48 -- host/discovery.sh@63 -- # xargs 00:29:18.096 08:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.096 08:21:48 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:18.096 08:21:48 -- host/discovery.sh@121 -- # get_notification_count 00:29:18.096 08:21:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:18.096 08:21:48 -- host/discovery.sh@74 -- # jq '. | length' 00:29:18.096 08:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.096 08:21:48 -- common/autotest_common.sh@10 -- # set +x 00:29:18.096 08:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.357 08:21:48 -- host/discovery.sh@74 -- # notification_count=0 00:29:18.357 08:21:48 -- host/discovery.sh@75 -- # notify_id=2 00:29:18.357 08:21:48 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:29:18.357 08:21:48 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:18.357 08:21:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.357 08:21:48 -- common/autotest_common.sh@10 -- # set +x 00:29:18.358 [2024-06-11 08:21:48.749531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.358 [2024-06-11 08:21:48.749556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.358 [2024-06-11 08:21:48.749565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.358 [2024-06-11 08:21:48.749573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.358 [2024-06-11 08:21:48.749581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.358 [2024-06-11 08:21:48.749588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.358 [2024-06-11 08:21:48.749595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:18.358 [2024-06-11 08:21:48.749607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.358 [2024-06-11 08:21:48.749614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.750479] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:18.358 [2024-06-11 08:21:48.750494] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:18.358 08:21:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.358 08:21:48 -- host/discovery.sh@127 -- # sleep 1 00:29:18.358 [2024-06-11 08:21:48.759532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.769571] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.358 [2024-06-11 08:21:48.769945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.770250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.770261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fcb10 with addr=10.0.0.2, port=4420 00:29:18.358 [2024-06-11 08:21:48.770269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.770281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.770291] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.358 [2024-06-11 08:21:48.770298] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.358 [2024-06-11 08:21:48.770306] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.358 [2024-06-11 08:21:48.770318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.358 [2024-06-11 08:21:48.779627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.358 [2024-06-11 08:21:48.779947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.780287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.780298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fcb10 with addr=10.0.0.2, port=4420 00:29:18.358 [2024-06-11 08:21:48.780305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.780316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.780326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.358 [2024-06-11 08:21:48.780333] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.358 [2024-06-11 08:21:48.780339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.358 [2024-06-11 08:21:48.780350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
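(Reader's note: the repeated "resetting controller ... connect() failed, errno = 111 ... Resetting controller failed" blocks here are expected, not a test failure. The 4420 listener was just removed from cnode0, so the host's bdev_nvme layer keeps retrying that path and is refused, errno 111 being ECONNREFUSED, until the next discovery log page reports the path gone. The surviving paths can be listed the same way the suite's get_subsystem_paths helper does:)

# Listener ports (trsvcid) of the paths currently attached to controller nvme0.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# Expected once the removal settles: 4421 only.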
00:29:18.358 [2024-06-11 08:21:48.789678] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.358 [2024-06-11 08:21:48.789965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.790311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.790322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fcb10 with addr=10.0.0.2, port=4420 00:29:18.358 [2024-06-11 08:21:48.790330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.790344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.790354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.358 [2024-06-11 08:21:48.790361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.358 [2024-06-11 08:21:48.790368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.358 [2024-06-11 08:21:48.790378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.358 [2024-06-11 08:21:48.799730] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.358 [2024-06-11 08:21:48.800100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.800402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.800413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fcb10 with addr=10.0.0.2, port=4420 00:29:18.358 [2024-06-11 08:21:48.800421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.800432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.800449] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.358 [2024-06-11 08:21:48.800456] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.358 [2024-06-11 08:21:48.800463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.358 [2024-06-11 08:21:48.800474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.358 [2024-06-11 08:21:48.809785] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.358 [2024-06-11 08:21:48.809957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.810251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.810262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fcb10 with addr=10.0.0.2, port=4420 00:29:18.358 [2024-06-11 08:21:48.810269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.810279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.810289] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.358 [2024-06-11 08:21:48.810295] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.358 [2024-06-11 08:21:48.810302] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.358 [2024-06-11 08:21:48.810313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.358 [2024-06-11 08:21:48.819835] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.358 [2024-06-11 08:21:48.820147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.820448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.820459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fcb10 with addr=10.0.0.2, port=4420 00:29:18.358 [2024-06-11 08:21:48.820466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.820477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.820487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.358 [2024-06-11 08:21:48.820497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.358 [2024-06-11 08:21:48.820504] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.358 [2024-06-11 08:21:48.820515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
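(Reader's note: a few lines further on the discovery poller catches up; the log page no longer lists port 4420, so that path is dropped, "not found", while 4421 is kept, "found again", and the test then verifies that 4421 is the only remaining path and that no new notifications are pending. The notification_count / notify_id bookkeeping seen throughout is simply the length of the event list returned past the last processed id, for example:)

# Bdev event notifications past id 2 (the last id the test has consumed at this point);
# an empty list means notification_count=0 and notify_id stays at 2.
scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'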
00:29:18.358 [2024-06-11 08:21:48.829886] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:18.358 [2024-06-11 08:21:48.830244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.830665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.358 [2024-06-11 08:21:48.830703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6fcb10 with addr=10.0.0.2, port=4420 00:29:18.358 [2024-06-11 08:21:48.830713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fcb10 is same with the state(5) to be set 00:29:18.358 [2024-06-11 08:21:48.830731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fcb10 (9): Bad file descriptor 00:29:18.358 [2024-06-11 08:21:48.830758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:18.358 [2024-06-11 08:21:48.830766] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:18.358 [2024-06-11 08:21:48.830774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:18.358 [2024-06-11 08:21:48.830789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.358 [2024-06-11 08:21:48.838774] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:18.358 [2024-06-11 08:21:48.838793] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:19.298 08:21:49 -- host/discovery.sh@128 -- # get_subsystem_names 00:29:19.298 08:21:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:19.298 08:21:49 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:19.298 08:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:19.298 08:21:49 -- host/discovery.sh@59 -- # sort 00:29:19.298 08:21:49 -- common/autotest_common.sh@10 -- # set +x 00:29:19.298 08:21:49 -- host/discovery.sh@59 -- # xargs 00:29:19.298 08:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:19.298 08:21:49 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:19.298 08:21:49 -- host/discovery.sh@129 -- # get_bdev_list 00:29:19.298 08:21:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:19.298 08:21:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:19.298 08:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:19.298 08:21:49 -- host/discovery.sh@55 -- # sort 00:29:19.298 08:21:49 -- common/autotest_common.sh@10 -- # set +x 00:29:19.298 08:21:49 -- host/discovery.sh@55 -- # xargs 00:29:19.298 08:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:19.298 08:21:49 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:19.298 08:21:49 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:29:19.298 08:21:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:19.298 08:21:49 -- host/discovery.sh@63 -- # xargs 00:29:19.298 08:21:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:19.298 08:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:19.298 08:21:49 -- 
host/discovery.sh@63 -- # sort -n 00:29:19.298 08:21:49 -- common/autotest_common.sh@10 -- # set +x 00:29:19.298 08:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:19.298 08:21:49 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:29:19.298 08:21:49 -- host/discovery.sh@131 -- # get_notification_count 00:29:19.298 08:21:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:19.298 08:21:49 -- host/discovery.sh@74 -- # jq '. | length' 00:29:19.298 08:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:19.298 08:21:49 -- common/autotest_common.sh@10 -- # set +x 00:29:19.298 08:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:19.557 08:21:49 -- host/discovery.sh@74 -- # notification_count=0 00:29:19.557 08:21:49 -- host/discovery.sh@75 -- # notify_id=2 00:29:19.557 08:21:49 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:29:19.557 08:21:49 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:19.557 08:21:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:19.557 08:21:49 -- common/autotest_common.sh@10 -- # set +x 00:29:19.557 08:21:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:19.557 08:21:49 -- host/discovery.sh@135 -- # sleep 1 00:29:20.493 08:21:50 -- host/discovery.sh@136 -- # get_subsystem_names 00:29:20.493 08:21:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:20.493 08:21:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:20.493 08:21:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.493 08:21:50 -- host/discovery.sh@59 -- # sort 00:29:20.493 08:21:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.493 08:21:50 -- host/discovery.sh@59 -- # xargs 00:29:20.493 08:21:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.493 08:21:51 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:29:20.493 08:21:51 -- host/discovery.sh@137 -- # get_bdev_list 00:29:20.493 08:21:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:20.493 08:21:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:20.493 08:21:51 -- host/discovery.sh@55 -- # sort 00:29:20.493 08:21:51 -- host/discovery.sh@55 -- # xargs 00:29:20.493 08:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.493 08:21:51 -- common/autotest_common.sh@10 -- # set +x 00:29:20.494 08:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.494 08:21:51 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:29:20.494 08:21:51 -- host/discovery.sh@138 -- # get_notification_count 00:29:20.494 08:21:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:20.494 08:21:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:20.494 08:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.494 08:21:51 -- common/autotest_common.sh@10 -- # set +x 00:29:20.494 08:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.494 08:21:51 -- host/discovery.sh@74 -- # notification_count=2 00:29:20.494 08:21:51 -- host/discovery.sh@75 -- # notify_id=4 00:29:20.494 08:21:51 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:29:20.494 08:21:51 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:20.494 08:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.494 08:21:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.874 [2024-06-11 08:21:52.127020] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:21.874 [2024-06-11 08:21:52.127039] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:21.874 [2024-06-11 08:21:52.127053] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:21.874 [2024-06-11 08:21:52.256477] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:21.874 [2024-06-11 08:21:52.320311] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:21.874 [2024-06-11 08:21:52.320349] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:21.874 08:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.874 08:21:52 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:21.874 08:21:52 -- common/autotest_common.sh@640 -- # local es=0 00:29:21.874 08:21:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:21.874 08:21:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:21.874 08:21:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.874 08:21:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:21.874 08:21:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.874 08:21:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:21.874 08:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.874 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:21.874 request: 00:29:21.874 { 00:29:21.874 "name": "nvme", 00:29:21.874 "trtype": "tcp", 00:29:21.874 "traddr": "10.0.0.2", 00:29:21.874 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:21.874 "adrfam": "ipv4", 00:29:21.874 "trsvcid": "8009", 00:29:21.874 "wait_for_attach": true, 00:29:21.874 "method": "bdev_nvme_start_discovery", 00:29:21.874 "req_id": 1 00:29:21.874 } 00:29:21.874 Got JSON-RPC error response 00:29:21.874 response: 00:29:21.874 { 00:29:21.874 "code": -17, 00:29:21.874 "message": "File exists" 00:29:21.874 } 00:29:21.874 08:21:52 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:29:21.874 08:21:52 -- common/autotest_common.sh@643 -- # es=1 00:29:21.874 08:21:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:21.874 08:21:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:21.874 08:21:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:21.874 08:21:52 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:21.874 08:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # sort 00:29:21.874 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # xargs 00:29:21.874 08:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.874 08:21:52 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:29:21.874 08:21:52 -- host/discovery.sh@147 -- # get_bdev_list 00:29:21.874 08:21:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:21.874 08:21:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:21.874 08:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.874 08:21:52 -- host/discovery.sh@55 -- # sort 00:29:21.874 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:21.874 08:21:52 -- host/discovery.sh@55 -- # xargs 00:29:21.874 08:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.874 08:21:52 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:21.874 08:21:52 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:21.874 08:21:52 -- common/autotest_common.sh@640 -- # local es=0 00:29:21.874 08:21:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:21.874 08:21:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:21.874 08:21:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.874 08:21:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:21.874 08:21:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:21.874 08:21:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:21.874 08:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.874 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:21.874 request: 00:29:21.874 { 00:29:21.874 "name": "nvme_second", 00:29:21.874 "trtype": "tcp", 00:29:21.874 "traddr": "10.0.0.2", 00:29:21.874 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:21.874 "adrfam": "ipv4", 00:29:21.874 "trsvcid": "8009", 00:29:21.874 "wait_for_attach": true, 00:29:21.874 "method": "bdev_nvme_start_discovery", 00:29:21.874 "req_id": 1 00:29:21.874 } 00:29:21.874 Got JSON-RPC error response 00:29:21.874 response: 00:29:21.874 { 00:29:21.874 "code": -17, 00:29:21.874 "message": "File exists" 00:29:21.874 } 00:29:21.874 08:21:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:21.874 08:21:52 -- common/autotest_common.sh@643 -- # es=1 00:29:21.874 08:21:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:21.874 08:21:52 -- common/autotest_common.sh@662 -- 
# [[ -n '' ]] 00:29:21.874 08:21:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:21.874 08:21:52 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:21.874 08:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # sort 00:29:21.874 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:21.874 08:21:52 -- host/discovery.sh@67 -- # xargs 00:29:21.874 08:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.874 08:21:52 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:29:21.874 08:21:52 -- host/discovery.sh@153 -- # get_bdev_list 00:29:21.874 08:21:52 -- host/discovery.sh@55 -- # xargs 00:29:21.875 08:21:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:21.875 08:21:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:21.875 08:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.875 08:21:52 -- host/discovery.sh@55 -- # sort 00:29:21.875 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.132 08:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.132 08:21:52 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:22.132 08:21:52 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:22.132 08:21:52 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.132 08:21:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:22.132 08:21:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:22.132 08:21:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.132 08:21:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:22.133 08:21:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.133 08:21:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:22.133 08:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.133 08:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:23.068 [2024-06-11 08:21:53.571813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-06-11 08:21:53.572149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:23.068 [2024-06-11 08:21:53.572162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c3350 with addr=10.0.0.2, port=8010 00:29:23.068 [2024-06-11 08:21:53.572175] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:23.068 [2024-06-11 08:21:53.572183] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:23.068 [2024-06-11 08:21:53.572191] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:24.007 [2024-06-11 08:21:54.574115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.007 [2024-06-11 08:21:54.574427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.007 
[2024-06-11 08:21:54.574444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6f2000 with addr=10.0.0.2, port=8010 00:29:24.007 [2024-06-11 08:21:54.574456] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:24.007 [2024-06-11 08:21:54.574463] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:24.007 [2024-06-11 08:21:54.574469] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:24.946 [2024-06-11 08:21:55.576136] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:24.946 request: 00:29:24.946 { 00:29:24.946 "name": "nvme_second", 00:29:24.946 "trtype": "tcp", 00:29:24.946 "traddr": "10.0.0.2", 00:29:24.946 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:24.946 "adrfam": "ipv4", 00:29:24.946 "trsvcid": "8010", 00:29:24.946 "attach_timeout_ms": 3000, 00:29:24.946 "method": "bdev_nvme_start_discovery", 00:29:24.946 "req_id": 1 00:29:24.946 } 00:29:24.946 Got JSON-RPC error response 00:29:24.946 response: 00:29:24.946 { 00:29:24.946 "code": -110, 00:29:24.946 "message": "Connection timed out" 00:29:24.946 } 00:29:24.946 08:21:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:24.946 08:21:55 -- common/autotest_common.sh@643 -- # es=1 00:29:24.946 08:21:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:24.946 08:21:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:24.946 08:21:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:24.946 08:21:55 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:29:24.946 08:21:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:24.946 08:21:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:24.946 08:21:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.946 08:21:55 -- host/discovery.sh@67 -- # sort 00:29:24.946 08:21:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.946 08:21:55 -- host/discovery.sh@67 -- # xargs 00:29:25.207 08:21:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.207 08:21:55 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:29:25.207 08:21:55 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:29:25.207 08:21:55 -- host/discovery.sh@162 -- # kill 1227544 00:29:25.207 08:21:55 -- host/discovery.sh@163 -- # nvmftestfini 00:29:25.207 08:21:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:25.207 08:21:55 -- nvmf/common.sh@116 -- # sync 00:29:25.207 08:21:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:25.207 08:21:55 -- nvmf/common.sh@119 -- # set +e 00:29:25.207 08:21:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:25.207 08:21:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:25.207 rmmod nvme_tcp 00:29:25.207 rmmod nvme_fabrics 00:29:25.207 rmmod nvme_keyring 00:29:25.207 08:21:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:25.207 08:21:55 -- nvmf/common.sh@123 -- # set -e 00:29:25.207 08:21:55 -- nvmf/common.sh@124 -- # return 0 00:29:25.207 08:21:55 -- nvmf/common.sh@477 -- # '[' -n 1227303 ']' 00:29:25.207 08:21:55 -- nvmf/common.sh@478 -- # killprocess 1227303 00:29:25.207 08:21:55 -- common/autotest_common.sh@926 -- # '[' -z 1227303 ']' 00:29:25.207 08:21:55 -- common/autotest_common.sh@930 -- # kill -0 1227303 00:29:25.207 08:21:55 -- common/autotest_common.sh@931 -- # uname 00:29:25.207 08:21:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
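For anyone replaying this part of the run by hand, the discovery checks traced above reduce to a handful of rpc.py calls against the host-side socket. The sketch below assumes the SPDK checkout used by this job (rpc.py under scripts/) and a host application already listening on /tmp/host.sock; the address, ports and host NQN are the ones taken from the trace.

# attach a discovery controller; -w waits for the initial attach to complete
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# starting a second discovery against the same 10.0.0.2:8009 listener is rejected with -17 "File exists",
# whether the base name is reused or not, as the two failed attempts above show
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# pointing a new discovery service at port 8010, where nothing listens, with the attach capped at 3000 ms,
# fails with -110 "Connection timed out" once the connect retries give up, as the response above shows
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
# tear the remaining discovery service down again
scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme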
00:29:25.207 08:21:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1227303 00:29:25.207 08:21:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:25.207 08:21:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:25.207 08:21:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1227303' 00:29:25.207 killing process with pid 1227303 00:29:25.207 08:21:55 -- common/autotest_common.sh@945 -- # kill 1227303 00:29:25.207 08:21:55 -- common/autotest_common.sh@950 -- # wait 1227303 00:29:25.468 08:21:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:25.468 08:21:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:25.468 08:21:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:25.468 08:21:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.468 08:21:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:25.468 08:21:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.468 08:21:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.468 08:21:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.424 08:21:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:27.424 00:29:27.424 real 0m22.428s 00:29:27.424 user 0m28.273s 00:29:27.424 sys 0m6.741s 00:29:27.424 08:21:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.424 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:29:27.424 ************************************ 00:29:27.424 END TEST nvmf_discovery 00:29:27.424 ************************************ 00:29:27.424 08:21:57 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:27.424 08:21:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:27.424 08:21:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:27.424 08:21:57 -- common/autotest_common.sh@10 -- # set +x 00:29:27.424 ************************************ 00:29:27.424 START TEST nvmf_discovery_remove_ifc 00:29:27.424 ************************************ 00:29:27.424 08:21:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:27.686 * Looking for test storage... 
00:29:27.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.686 08:21:58 -- nvmf/common.sh@7 -- # uname -s 00:29:27.686 08:21:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.686 08:21:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.686 08:21:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.686 08:21:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.686 08:21:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.686 08:21:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.686 08:21:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.686 08:21:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.686 08:21:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.686 08:21:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.686 08:21:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:27.686 08:21:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:27.686 08:21:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.686 08:21:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.686 08:21:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.686 08:21:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.686 08:21:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.686 08:21:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.686 08:21:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.686 08:21:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.686 08:21:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.686 08:21:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.686 08:21:58 -- paths/export.sh@5 -- # export PATH 00:29:27.686 08:21:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.686 08:21:58 -- nvmf/common.sh@46 -- # : 0 00:29:27.686 08:21:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:27.686 08:21:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:27.686 08:21:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:27.686 08:21:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.686 08:21:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.686 08:21:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:27.686 08:21:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:27.686 08:21:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:27.686 08:21:58 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:27.686 08:21:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:27.686 08:21:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.686 08:21:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:27.686 08:21:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:27.686 08:21:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:27.686 08:21:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.686 08:21:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.686 08:21:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.686 08:21:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:27.686 08:21:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:27.686 08:21:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:27.686 08:21:58 -- common/autotest_common.sh@10 -- # set +x 00:29:35.831 08:22:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:35.831 08:22:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:35.831 08:22:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:35.831 08:22:05 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:35.831 08:22:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:35.831 08:22:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:35.831 08:22:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:35.831 08:22:05 -- nvmf/common.sh@294 -- # net_devs=() 00:29:35.831 08:22:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:35.831 08:22:05 -- nvmf/common.sh@295 -- # e810=() 00:29:35.831 08:22:05 -- nvmf/common.sh@295 -- # local -ga e810 00:29:35.831 08:22:05 -- nvmf/common.sh@296 -- # x722=() 00:29:35.831 08:22:05 -- nvmf/common.sh@296 -- # local -ga x722 00:29:35.831 08:22:05 -- nvmf/common.sh@297 -- # mlx=() 00:29:35.831 08:22:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:35.831 08:22:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.831 08:22:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:35.831 08:22:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:35.831 08:22:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:35.831 08:22:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:35.831 08:22:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:35.831 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:35.831 08:22:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:35.831 08:22:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:35.831 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:35.831 08:22:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:35.831 08:22:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:35.831 08:22:05 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:35.831 08:22:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.831 08:22:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:35.831 08:22:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.831 08:22:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:35.831 Found net devices under 0000:31:00.0: cvl_0_0 00:29:35.831 08:22:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.831 08:22:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:35.831 08:22:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.831 08:22:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:35.831 08:22:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.831 08:22:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:35.831 Found net devices under 0000:31:00.1: cvl_0_1 00:29:35.831 08:22:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.831 08:22:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:35.831 08:22:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:35.831 08:22:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:35.831 08:22:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:35.831 08:22:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.831 08:22:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.831 08:22:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.831 08:22:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:35.831 08:22:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.831 08:22:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.831 08:22:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:35.831 08:22:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.831 08:22:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.831 08:22:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:35.831 08:22:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:35.831 08:22:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.831 08:22:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.831 08:22:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.831 08:22:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.832 08:22:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:35.832 08:22:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.832 08:22:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.832 08:22:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.832 08:22:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:35.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:35.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:29:35.832 00:29:35.832 --- 10.0.0.2 ping statistics --- 00:29:35.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.832 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:29:35.832 08:22:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:35.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:29:35.832 00:29:35.832 --- 10.0.0.1 ping statistics --- 00:29:35.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.832 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:35.832 08:22:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.832 08:22:05 -- nvmf/common.sh@410 -- # return 0 00:29:35.832 08:22:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:35.832 08:22:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.832 08:22:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:35.832 08:22:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:35.832 08:22:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.832 08:22:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:35.832 08:22:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:35.832 08:22:05 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:35.832 08:22:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:35.832 08:22:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:35.832 08:22:05 -- common/autotest_common.sh@10 -- # set +x 00:29:35.832 08:22:05 -- nvmf/common.sh@469 -- # nvmfpid=1234253 00:29:35.832 08:22:05 -- nvmf/common.sh@470 -- # waitforlisten 1234253 00:29:35.832 08:22:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:35.832 08:22:05 -- common/autotest_common.sh@819 -- # '[' -z 1234253 ']' 00:29:35.832 08:22:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.832 08:22:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:35.832 08:22:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.832 08:22:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:35.832 08:22:05 -- common/autotest_common.sh@10 -- # set +x 00:29:35.832 [2024-06-11 08:22:05.388394] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:35.832 [2024-06-11 08:22:05.388458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.832 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.832 [2024-06-11 08:22:05.476647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.832 [2024-06-11 08:22:05.566305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:35.832 [2024-06-11 08:22:05.566467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
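Two SPDK processes take part in the discovery_remove_ifc run that is starting here: the target launched just above inside the cvl_0_0_ns_spdk namespace, answering RPCs on the default /var/tmp/spdk.sock, and a second, host-side application started a few lines further down with -r /tmp/host.sock, which is the socket every rpc_cmd -s /tmp/host.sock call in this test talks to. A rough sketch of the two launch lines, with the absolute build path shortened for readability:

# target side: serves the NVMe-oF subsystems and listeners, runs inside the network namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
# host side: owns the discovery service and the resulting nvme bdevs, RPC socket moved to /tmp/host.sock
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme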
00:29:35.832 [2024-06-11 08:22:05.566479] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.832 [2024-06-11 08:22:05.566487] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.832 [2024-06-11 08:22:05.566520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.832 08:22:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:35.832 08:22:06 -- common/autotest_common.sh@852 -- # return 0 00:29:35.832 08:22:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:35.832 08:22:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:35.832 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:29:35.832 08:22:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.832 08:22:06 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:35.832 08:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.832 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:29:35.832 [2024-06-11 08:22:06.249539] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.832 [2024-06-11 08:22:06.257673] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:35.832 null0 00:29:35.832 [2024-06-11 08:22:06.289699] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.832 08:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.832 08:22:06 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1234292 00:29:35.832 08:22:06 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1234292 /tmp/host.sock 00:29:35.832 08:22:06 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:35.832 08:22:06 -- common/autotest_common.sh@819 -- # '[' -z 1234292 ']' 00:29:35.832 08:22:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:35.832 08:22:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:35.832 08:22:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:35.832 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:35.832 08:22:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:35.832 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:29:35.832 [2024-06-11 08:22:06.364420] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:35.832 [2024-06-11 08:22:06.364522] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234292 ] 00:29:35.832 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.832 [2024-06-11 08:22:06.428220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.092 [2024-06-11 08:22:06.490540] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:36.092 [2024-06-11 08:22:06.490675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.092 08:22:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:36.092 08:22:06 -- common/autotest_common.sh@852 -- # return 0 00:29:36.092 08:22:06 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.092 08:22:06 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:36.092 08:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.092 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:29:36.092 08:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.092 08:22:06 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:36.092 08:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.092 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:29:36.092 08:22:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.092 08:22:06 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:36.092 08:22:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.092 08:22:06 -- common/autotest_common.sh@10 -- # set +x 00:29:37.034 [2024-06-11 08:22:07.677671] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:37.034 [2024-06-11 08:22:07.677691] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:37.034 [2024-06-11 08:22:07.677704] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:37.294 [2024-06-11 08:22:07.765995] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:37.554 [2024-06-11 08:22:07.992557] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:37.554 [2024-06-11 08:22:07.992599] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:37.554 [2024-06-11 08:22:07.992620] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:37.554 [2024-06-11 08:22:07.992634] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:37.554 [2024-06-11 08:22:07.992653] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:37.554 08:22:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.554 08:22:07 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:37.555 [2024-06-11 08:22:07.995624] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1347180 was disconnected and freed. delete nvme_qpair. 00:29:37.555 08:22:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:37.555 08:22:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.555 08:22:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:37.555 08:22:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.555 08:22:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:37.555 08:22:07 -- common/autotest_common.sh@10 -- # set +x 00:29:37.555 08:22:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:37.555 08:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:37.555 08:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.555 08:22:08 -- common/autotest_common.sh@10 -- # set +x 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:37.555 08:22:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:37.815 08:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.815 08:22:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:37.815 08:22:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:38.790 08:22:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:38.790 08:22:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:38.790 08:22:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:38.790 08:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.790 08:22:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:38.790 08:22:09 -- common/autotest_common.sh@10 -- # set +x 00:29:38.790 08:22:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:38.790 08:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.790 08:22:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:38.790 08:22:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:39.773 08:22:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:39.773 08:22:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.773 08:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.773 08:22:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:39.773 08:22:10 -- common/autotest_common.sh@10 -- # set +x 00:29:39.773 08:22:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:39.773 08:22:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:39.773 08:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.773 08:22:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:39.773 08:22:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:40.712 08:22:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:40.712 08:22:11 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:40.712 08:22:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.712 08:22:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.712 08:22:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:40.712 08:22:11 -- common/autotest_common.sh@10 -- # set +x 00:29:40.712 08:22:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:40.972 08:22:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.972 08:22:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:40.972 08:22:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:41.912 08:22:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:41.912 08:22:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:41.912 08:22:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:41.912 08:22:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:41.912 08:22:12 -- common/autotest_common.sh@10 -- # set +x 00:29:41.912 08:22:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:41.912 08:22:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:41.912 08:22:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:41.912 08:22:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:41.912 08:22:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:42.853 [2024-06-11 08:22:13.433415] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:42.853 [2024-06-11 08:22:13.433461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.853 [2024-06-11 08:22:13.433472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.853 [2024-06-11 08:22:13.433483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.853 [2024-06-11 08:22:13.433491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.853 [2024-06-11 08:22:13.433499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.853 [2024-06-11 08:22:13.433506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.853 [2024-06-11 08:22:13.433519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.853 [2024-06-11 08:22:13.433526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.853 [2024-06-11 08:22:13.433534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.853 [2024-06-11 08:22:13.433541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.853 [2024-06-11 08:22:13.433548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d7a0 is same with the state(5) to be set 00:29:42.853 [2024-06-11 
08:22:13.443436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d7a0 (9): Bad file descriptor 00:29:42.853 08:22:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:42.853 08:22:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:42.853 08:22:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:42.853 08:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:42.853 08:22:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:42.853 08:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:42.853 08:22:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:42.853 [2024-06-11 08:22:13.453478] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.236 [2024-06-11 08:22:14.513498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:45.178 [2024-06-11 08:22:15.537474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:45.178 [2024-06-11 08:22:15.537513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x130d7a0 with addr=10.0.0.2, port=4420 00:29:45.178 [2024-06-11 08:22:15.537525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x130d7a0 is same with the state(5) to be set 00:29:45.178 [2024-06-11 08:22:15.537856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130d7a0 (9): Bad file descriptor 00:29:45.179 [2024-06-11 08:22:15.537878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.179 [2024-06-11 08:22:15.537900] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:45.179 [2024-06-11 08:22:15.537922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.179 [2024-06-11 08:22:15.537932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.179 [2024-06-11 08:22:15.537942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.179 [2024-06-11 08:22:15.537950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.179 [2024-06-11 08:22:15.537958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.179 [2024-06-11 08:22:15.537965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.179 [2024-06-11 08:22:15.537973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.179 [2024-06-11 08:22:15.537981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.179 [2024-06-11 08:22:15.537989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:45.179 [2024-06-11 08:22:15.537997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:45.179 [2024-06-11 08:22:15.538009] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:29:45.179 [2024-06-11 08:22:15.538546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x130dbb0 (9): Bad file descriptor 00:29:45.179 [2024-06-11 08:22:15.539557] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:45.179 [2024-06-11 08:22:15.539570] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:45.179 08:22:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.179 08:22:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:45.179 08:22:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:46.120 08:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:46.120 08:22:16 -- common/autotest_common.sh@10 -- # set +x 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:46.120 08:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:46.120 08:22:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:46.120 08:22:16 -- common/autotest_common.sh@10 -- # set +x 00:29:46.120 08:22:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:46.120 08:22:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.380 08:22:16 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:46.380 08:22:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:46.951 [2024-06-11 08:22:17.593654] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:46.951 [2024-06-11 08:22:17.593673] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:46.951 [2024-06-11 08:22:17.593687] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:47.212 [2024-06-11 08:22:17.680965] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:47.212 08:22:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:47.212 08:22:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:47.212 08:22:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:47.212 08:22:17 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.212 08:22:17 -- common/autotest_common.sh@10 -- # set +x 00:29:47.212 08:22:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:47.212 08:22:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:47.212 08:22:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.212 08:22:17 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:47.212 08:22:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:47.472 [2024-06-11 08:22:17.905167] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:47.472 [2024-06-11 08:22:17.905207] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:47.472 [2024-06-11 08:22:17.905227] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:47.472 [2024-06-11 08:22:17.905242] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:47.472 [2024-06-11 08:22:17.905250] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:47.472 [2024-06-11 08:22:17.950035] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1350b90 was disconnected and freed. delete nvme_qpair. 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:48.414 08:22:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:48.414 08:22:18 -- common/autotest_common.sh@10 -- # set +x 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:48.414 08:22:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:48.414 08:22:18 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1234292 00:29:48.414 08:22:18 -- common/autotest_common.sh@926 -- # '[' -z 1234292 ']' 00:29:48.414 08:22:18 -- common/autotest_common.sh@930 -- # kill -0 1234292 00:29:48.414 08:22:18 -- common/autotest_common.sh@931 -- # uname 00:29:48.414 08:22:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:48.414 08:22:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1234292 00:29:48.414 08:22:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:48.414 08:22:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:48.414 08:22:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1234292' 00:29:48.414 killing process with pid 1234292 00:29:48.414 08:22:18 -- common/autotest_common.sh@945 -- # kill 1234292 00:29:48.414 08:22:18 -- common/autotest_common.sh@950 -- # wait 1234292 00:29:48.674 08:22:19 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:48.674 08:22:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:48.674 08:22:19 -- nvmf/common.sh@116 -- # sync 00:29:48.674 08:22:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:48.674 08:22:19 -- nvmf/common.sh@119 -- # set +e 00:29:48.674 08:22:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:48.674 08:22:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:48.674 rmmod nvme_tcp 00:29:48.674 rmmod nvme_fabrics 
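The interface flap that gives discovery_remove_ifc its name is what the trace above just exercised: the first discovery attach produced nvme0n1, the target-side address was removed and the port downed until the bdev list drained, and re-adding the address let the discovery service reconnect and attach a fresh controller as nvme1n1. A minimal sketch of that flap, reusing the interface and namespace names from this run:

# drop the target address and down the port; the host's reset attempts fail and nvme0n1 is deleted
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# restore the path; discovery reconnects and the new controller shows up as nvme1n1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up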
00:29:48.674 rmmod nvme_keyring 00:29:48.674 08:22:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:48.674 08:22:19 -- nvmf/common.sh@123 -- # set -e 00:29:48.674 08:22:19 -- nvmf/common.sh@124 -- # return 0 00:29:48.674 08:22:19 -- nvmf/common.sh@477 -- # '[' -n 1234253 ']' 00:29:48.674 08:22:19 -- nvmf/common.sh@478 -- # killprocess 1234253 00:29:48.674 08:22:19 -- common/autotest_common.sh@926 -- # '[' -z 1234253 ']' 00:29:48.674 08:22:19 -- common/autotest_common.sh@930 -- # kill -0 1234253 00:29:48.674 08:22:19 -- common/autotest_common.sh@931 -- # uname 00:29:48.674 08:22:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:48.674 08:22:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1234253 00:29:48.674 08:22:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:48.674 08:22:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:48.674 08:22:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1234253' 00:29:48.674 killing process with pid 1234253 00:29:48.674 08:22:19 -- common/autotest_common.sh@945 -- # kill 1234253 00:29:48.674 08:22:19 -- common/autotest_common.sh@950 -- # wait 1234253 00:29:48.674 08:22:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:48.674 08:22:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:48.674 08:22:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:48.674 08:22:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:48.674 08:22:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:48.674 08:22:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.674 08:22:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.674 08:22:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.219 08:22:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:51.219 00:29:51.219 real 0m23.385s 00:29:51.219 user 0m27.174s 00:29:51.219 sys 0m6.429s 00:29:51.219 08:22:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.219 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:29:51.219 ************************************ 00:29:51.219 END TEST nvmf_discovery_remove_ifc 00:29:51.219 ************************************ 00:29:51.219 08:22:21 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:29:51.219 08:22:21 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:51.219 08:22:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:51.219 08:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.219 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:29:51.219 ************************************ 00:29:51.219 START TEST nvmf_digest 00:29:51.219 ************************************ 00:29:51.219 08:22:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:51.219 * Looking for test storage... 
00:29:51.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.219 08:22:21 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.219 08:22:21 -- nvmf/common.sh@7 -- # uname -s 00:29:51.219 08:22:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.219 08:22:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.219 08:22:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.219 08:22:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.219 08:22:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.219 08:22:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.219 08:22:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.219 08:22:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.219 08:22:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.219 08:22:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.219 08:22:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:51.219 08:22:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:51.219 08:22:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.219 08:22:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.219 08:22:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.219 08:22:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.219 08:22:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.219 08:22:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.219 08:22:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.219 08:22:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.219 08:22:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.219 08:22:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.219 08:22:21 -- paths/export.sh@5 -- # export PATH 00:29:51.219 08:22:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.219 08:22:21 -- nvmf/common.sh@46 -- # : 0 00:29:51.219 08:22:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:51.219 08:22:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:51.219 08:22:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:51.219 08:22:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.219 08:22:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.219 08:22:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:51.219 08:22:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:51.219 08:22:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:51.219 08:22:21 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:51.219 08:22:21 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:51.219 08:22:21 -- host/digest.sh@16 -- # runtime=2 00:29:51.219 08:22:21 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:29:51.219 08:22:21 -- host/digest.sh@132 -- # nvmftestinit 00:29:51.219 08:22:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:51.219 08:22:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.219 08:22:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:51.219 08:22:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:51.219 08:22:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:51.219 08:22:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.219 08:22:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:51.219 08:22:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.219 08:22:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:51.219 08:22:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:51.219 08:22:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:51.219 08:22:21 -- common/autotest_common.sh@10 -- # set +x 00:29:57.808 08:22:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:57.808 08:22:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:57.808 08:22:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:57.808 08:22:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:57.808 08:22:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:57.808 08:22:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:57.808 08:22:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:57.808 08:22:28 -- 
nvmf/common.sh@294 -- # net_devs=() 00:29:57.808 08:22:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:57.808 08:22:28 -- nvmf/common.sh@295 -- # e810=() 00:29:57.808 08:22:28 -- nvmf/common.sh@295 -- # local -ga e810 00:29:57.808 08:22:28 -- nvmf/common.sh@296 -- # x722=() 00:29:57.808 08:22:28 -- nvmf/common.sh@296 -- # local -ga x722 00:29:57.808 08:22:28 -- nvmf/common.sh@297 -- # mlx=() 00:29:57.808 08:22:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:57.808 08:22:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.808 08:22:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:57.808 08:22:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:57.808 08:22:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:57.808 08:22:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:57.808 08:22:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:57.808 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:57.808 08:22:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:57.808 08:22:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:57.808 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:57.808 08:22:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:57.808 08:22:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:57.808 08:22:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.808 08:22:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:57.808 08:22:28 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.808 08:22:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:57.808 Found net devices under 0000:31:00.0: cvl_0_0 00:29:57.808 08:22:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.808 08:22:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:57.808 08:22:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.808 08:22:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:57.808 08:22:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.808 08:22:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:57.808 Found net devices under 0000:31:00.1: cvl_0_1 00:29:57.808 08:22:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.808 08:22:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:57.808 08:22:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:57.808 08:22:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:57.808 08:22:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:57.808 08:22:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:57.808 08:22:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.808 08:22:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.808 08:22:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:57.808 08:22:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.808 08:22:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.808 08:22:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:57.808 08:22:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.808 08:22:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.808 08:22:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:57.808 08:22:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:57.808 08:22:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.808 08:22:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.808 08:22:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.808 08:22:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.808 08:22:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:57.808 08:22:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:58.070 08:22:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:58.070 08:22:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:58.070 08:22:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:58.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:58.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:29:58.070 00:29:58.070 --- 10.0.0.2 ping statistics --- 00:29:58.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.070 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:29:58.070 08:22:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:58.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:58.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:58.070 00:29:58.070 --- 10.0.0.1 ping statistics --- 00:29:58.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:58.070 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:58.070 08:22:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:58.070 08:22:28 -- nvmf/common.sh@410 -- # return 0 00:29:58.070 08:22:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:58.070 08:22:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:58.070 08:22:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:58.070 08:22:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:58.070 08:22:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:58.070 08:22:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:58.070 08:22:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:58.070 08:22:28 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:58.070 08:22:28 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:58.070 08:22:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:58.070 08:22:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:58.070 08:22:28 -- common/autotest_common.sh@10 -- # set +x 00:29:58.070 ************************************ 00:29:58.070 START TEST nvmf_digest_clean 00:29:58.070 ************************************ 00:29:58.070 08:22:28 -- common/autotest_common.sh@1104 -- # run_digest 00:29:58.070 08:22:28 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:58.070 08:22:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:58.070 08:22:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:58.070 08:22:28 -- common/autotest_common.sh@10 -- # set +x 00:29:58.070 08:22:28 -- nvmf/common.sh@469 -- # nvmfpid=1241136 00:29:58.070 08:22:28 -- nvmf/common.sh@470 -- # waitforlisten 1241136 00:29:58.070 08:22:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:58.070 08:22:28 -- common/autotest_common.sh@819 -- # '[' -z 1241136 ']' 00:29:58.070 08:22:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.070 08:22:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:58.070 08:22:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.070 08:22:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:58.070 08:22:28 -- common/autotest_common.sh@10 -- # set +x 00:29:58.070 [2024-06-11 08:22:28.669570] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:58.070 [2024-06-11 08:22:28.669617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.070 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.329 [2024-06-11 08:22:28.735390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.329 [2024-06-11 08:22:28.797465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:58.329 [2024-06-11 08:22:28.797589] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.329 [2024-06-11 08:22:28.797598] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.329 [2024-06-11 08:22:28.797604] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.329 [2024-06-11 08:22:28.797624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.899 08:22:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:58.899 08:22:29 -- common/autotest_common.sh@852 -- # return 0 00:29:58.899 08:22:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:58.899 08:22:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:58.899 08:22:29 -- common/autotest_common.sh@10 -- # set +x 00:29:58.899 08:22:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.899 08:22:29 -- host/digest.sh@120 -- # common_target_config 00:29:58.899 08:22:29 -- host/digest.sh@43 -- # rpc_cmd 00:29:58.899 08:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.899 08:22:29 -- common/autotest_common.sh@10 -- # set +x 00:29:58.899 null0 00:29:58.899 [2024-06-11 08:22:29.544733] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.159 [2024-06-11 08:22:29.568914] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.159 08:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:59.160 08:22:29 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:59.160 08:22:29 -- host/digest.sh@77 -- # local rw bs qd 00:29:59.160 08:22:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:59.160 08:22:29 -- host/digest.sh@80 -- # rw=randread 00:29:59.160 08:22:29 -- host/digest.sh@80 -- # bs=4096 00:29:59.160 08:22:29 -- host/digest.sh@80 -- # qd=128 00:29:59.160 08:22:29 -- host/digest.sh@82 -- # bperfpid=1241302 00:29:59.160 08:22:29 -- host/digest.sh@83 -- # waitforlisten 1241302 /var/tmp/bperf.sock 00:29:59.160 08:22:29 -- common/autotest_common.sh@819 -- # '[' -z 1241302 ']' 00:29:59.160 08:22:29 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:59.160 08:22:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:59.160 08:22:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:59.160 08:22:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:59.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:59.160 08:22:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:59.160 08:22:29 -- common/autotest_common.sh@10 -- # set +x 00:29:59.160 [2024-06-11 08:22:29.621150] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:59.160 [2024-06-11 08:22:29.621196] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1241302 ] 00:29:59.160 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.160 [2024-06-11 08:22:29.697016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.160 [2024-06-11 08:22:29.749281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.731 08:22:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:59.731 08:22:30 -- common/autotest_common.sh@852 -- # return 0 00:29:59.731 08:22:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:59.731 08:22:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:59.731 08:22:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:59.992 08:22:30 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:59.993 08:22:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:00.564 nvme0n1 00:30:00.564 08:22:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:00.564 08:22:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:00.564 Running I/O for 2 seconds... 
00:30:02.478 00:30:02.478 Latency(us) 00:30:02.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.478 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:02.478 nvme0n1 : 2.01 17232.06 67.31 0.00 0.00 7423.47 1897.81 16930.13 00:30:02.478 =================================================================================================================== 00:30:02.478 Total : 17232.06 67.31 0.00 0.00 7423.47 1897.81 16930.13 00:30:02.478 0 00:30:02.478 08:22:33 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:02.478 08:22:33 -- host/digest.sh@92 -- # get_accel_stats 00:30:02.478 08:22:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:02.478 08:22:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:02.478 | select(.opcode=="crc32c") 00:30:02.478 | "\(.module_name) \(.executed)"' 00:30:02.478 08:22:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:02.739 08:22:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:02.739 08:22:33 -- host/digest.sh@93 -- # exp_module=software 00:30:02.739 08:22:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:02.739 08:22:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:02.739 08:22:33 -- host/digest.sh@97 -- # killprocess 1241302 00:30:02.739 08:22:33 -- common/autotest_common.sh@926 -- # '[' -z 1241302 ']' 00:30:02.739 08:22:33 -- common/autotest_common.sh@930 -- # kill -0 1241302 00:30:02.739 08:22:33 -- common/autotest_common.sh@931 -- # uname 00:30:02.739 08:22:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:02.739 08:22:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1241302 00:30:02.739 08:22:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:02.739 08:22:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:02.739 08:22:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1241302' 00:30:02.739 killing process with pid 1241302 00:30:02.739 08:22:33 -- common/autotest_common.sh@945 -- # kill 1241302 00:30:02.739 Received shutdown signal, test time was about 2.000000 seconds 00:30:02.739 00:30:02.739 Latency(us) 00:30:02.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.739 =================================================================================================================== 00:30:02.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:02.739 08:22:33 -- common/autotest_common.sh@950 -- # wait 1241302 00:30:03.000 08:22:33 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:30:03.000 08:22:33 -- host/digest.sh@77 -- # local rw bs qd 00:30:03.000 08:22:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:03.000 08:22:33 -- host/digest.sh@80 -- # rw=randread 00:30:03.000 08:22:33 -- host/digest.sh@80 -- # bs=131072 00:30:03.000 08:22:33 -- host/digest.sh@80 -- # qd=16 00:30:03.000 08:22:33 -- host/digest.sh@82 -- # bperfpid=1242123 00:30:03.000 08:22:33 -- host/digest.sh@83 -- # waitforlisten 1242123 /var/tmp/bperf.sock 00:30:03.000 08:22:33 -- common/autotest_common.sh@819 -- # '[' -z 1242123 ']' 00:30:03.000 08:22:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:03.000 08:22:33 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 
00:30:03.000 08:22:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:03.000 08:22:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:03.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:03.000 08:22:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:03.000 08:22:33 -- common/autotest_common.sh@10 -- # set +x 00:30:03.000 [2024-06-11 08:22:33.467449] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:03.000 [2024-06-11 08:22:33.467516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242123 ] 00:30:03.000 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:03.000 Zero copy mechanism will not be used. 00:30:03.000 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.000 [2024-06-11 08:22:33.544666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.000 [2024-06-11 08:22:33.595904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.571 08:22:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:03.571 08:22:34 -- common/autotest_common.sh@852 -- # return 0 00:30:03.571 08:22:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:03.571 08:22:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:03.571 08:22:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:03.832 08:22:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:03.833 08:22:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:04.406 nvme0n1 00:30:04.406 08:22:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:04.406 08:22:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:04.406 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:04.406 Zero copy mechanism will not be used. 00:30:04.406 Running I/O for 2 seconds... 
00:30:06.319 00:30:06.319 Latency(us) 00:30:06.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.319 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:06.319 nvme0n1 : 2.00 4693.86 586.73 0.00 0.00 3404.70 658.77 8628.91 00:30:06.319 =================================================================================================================== 00:30:06.319 Total : 4693.86 586.73 0.00 0.00 3404.70 658.77 8628.91 00:30:06.319 0 00:30:06.319 08:22:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:06.319 08:22:36 -- host/digest.sh@92 -- # get_accel_stats 00:30:06.319 08:22:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:06.319 08:22:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:06.319 | select(.opcode=="crc32c") 00:30:06.319 | "\(.module_name) \(.executed)"' 00:30:06.319 08:22:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:06.579 08:22:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:06.579 08:22:37 -- host/digest.sh@93 -- # exp_module=software 00:30:06.579 08:22:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:06.580 08:22:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:06.580 08:22:37 -- host/digest.sh@97 -- # killprocess 1242123 00:30:06.580 08:22:37 -- common/autotest_common.sh@926 -- # '[' -z 1242123 ']' 00:30:06.580 08:22:37 -- common/autotest_common.sh@930 -- # kill -0 1242123 00:30:06.580 08:22:37 -- common/autotest_common.sh@931 -- # uname 00:30:06.580 08:22:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:06.580 08:22:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1242123 00:30:06.580 08:22:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:06.580 08:22:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:06.580 08:22:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1242123' 00:30:06.580 killing process with pid 1242123 00:30:06.580 08:22:37 -- common/autotest_common.sh@945 -- # kill 1242123 00:30:06.580 Received shutdown signal, test time was about 2.000000 seconds 00:30:06.580 00:30:06.580 Latency(us) 00:30:06.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.580 =================================================================================================================== 00:30:06.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:06.580 08:22:37 -- common/autotest_common.sh@950 -- # wait 1242123 00:30:06.580 08:22:37 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:30:06.580 08:22:37 -- host/digest.sh@77 -- # local rw bs qd 00:30:06.580 08:22:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:06.580 08:22:37 -- host/digest.sh@80 -- # rw=randwrite 00:30:06.580 08:22:37 -- host/digest.sh@80 -- # bs=4096 00:30:06.580 08:22:37 -- host/digest.sh@80 -- # qd=128 00:30:06.580 08:22:37 -- host/digest.sh@82 -- # bperfpid=1242871 00:30:06.580 08:22:37 -- host/digest.sh@83 -- # waitforlisten 1242871 /var/tmp/bperf.sock 00:30:06.580 08:22:37 -- common/autotest_common.sh@819 -- # '[' -z 1242871 ']' 00:30:06.580 08:22:37 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:06.580 08:22:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:30:06.580 08:22:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:06.580 08:22:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:06.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:06.580 08:22:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:06.580 08:22:37 -- common/autotest_common.sh@10 -- # set +x 00:30:06.841 [2024-06-11 08:22:37.257344] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:06.841 [2024-06-11 08:22:37.257399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242871 ] 00:30:06.841 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.841 [2024-06-11 08:22:37.331758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.841 [2024-06-11 08:22:37.383431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.412 08:22:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:07.412 08:22:38 -- common/autotest_common.sh@852 -- # return 0 00:30:07.412 08:22:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:07.412 08:22:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:07.412 08:22:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:07.673 08:22:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:07.673 08:22:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:07.934 nvme0n1 00:30:07.934 08:22:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:07.934 08:22:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:07.934 Running I/O for 2 seconds... 
00:30:10.483 00:30:10.483 Latency(us) 00:30:10.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.483 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:10.483 nvme0n1 : 2.01 22588.71 88.24 0.00 0.00 5661.61 4014.08 15291.73 00:30:10.483 =================================================================================================================== 00:30:10.483 Total : 22588.71 88.24 0.00 0.00 5661.61 4014.08 15291.73 00:30:10.483 0 00:30:10.483 08:22:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:10.483 08:22:40 -- host/digest.sh@92 -- # get_accel_stats 00:30:10.483 08:22:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:10.483 08:22:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:10.483 | select(.opcode=="crc32c") 00:30:10.483 | "\(.module_name) \(.executed)"' 00:30:10.483 08:22:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:10.483 08:22:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:10.483 08:22:40 -- host/digest.sh@93 -- # exp_module=software 00:30:10.483 08:22:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:10.483 08:22:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:10.483 08:22:40 -- host/digest.sh@97 -- # killprocess 1242871 00:30:10.483 08:22:40 -- common/autotest_common.sh@926 -- # '[' -z 1242871 ']' 00:30:10.483 08:22:40 -- common/autotest_common.sh@930 -- # kill -0 1242871 00:30:10.483 08:22:40 -- common/autotest_common.sh@931 -- # uname 00:30:10.483 08:22:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:10.483 08:22:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1242871 00:30:10.483 08:22:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:10.483 08:22:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:10.483 08:22:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1242871' 00:30:10.483 killing process with pid 1242871 00:30:10.483 08:22:40 -- common/autotest_common.sh@945 -- # kill 1242871 00:30:10.483 Received shutdown signal, test time was about 2.000000 seconds 00:30:10.483 00:30:10.483 Latency(us) 00:30:10.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.483 =================================================================================================================== 00:30:10.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:10.483 08:22:40 -- common/autotest_common.sh@950 -- # wait 1242871 00:30:10.483 08:22:40 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:30:10.483 08:22:40 -- host/digest.sh@77 -- # local rw bs qd 00:30:10.483 08:22:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:10.483 08:22:40 -- host/digest.sh@80 -- # rw=randwrite 00:30:10.483 08:22:40 -- host/digest.sh@80 -- # bs=131072 00:30:10.483 08:22:40 -- host/digest.sh@80 -- # qd=16 00:30:10.483 08:22:40 -- host/digest.sh@82 -- # bperfpid=1243567 00:30:10.483 08:22:40 -- host/digest.sh@83 -- # waitforlisten 1243567 /var/tmp/bperf.sock 00:30:10.483 08:22:40 -- common/autotest_common.sh@819 -- # '[' -z 1243567 ']' 00:30:10.483 08:22:40 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:10.483 08:22:40 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:10.483 08:22:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:10.483 08:22:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:10.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:10.483 08:22:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:10.483 08:22:40 -- common/autotest_common.sh@10 -- # set +x 00:30:10.483 [2024-06-11 08:22:40.975510] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:10.483 [2024-06-11 08:22:40.975581] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243567 ] 00:30:10.483 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:10.483 Zero copy mechanism will not be used. 00:30:10.483 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.483 [2024-06-11 08:22:41.051585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.483 [2024-06-11 08:22:41.102854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.424 08:22:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:11.424 08:22:41 -- common/autotest_common.sh@852 -- # return 0 00:30:11.424 08:22:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:11.424 08:22:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:11.424 08:22:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:11.424 08:22:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:11.424 08:22:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:11.703 nvme0n1 00:30:11.703 08:22:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:11.703 08:22:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:11.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:11.964 Zero copy mechanism will not be used. 00:30:11.964 Running I/O for 2 seconds... 
00:30:13.909 00:30:13.910 Latency(us) 00:30:13.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.910 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:13.910 nvme0n1 : 2.00 4534.52 566.81 0.00 0.00 3524.02 1536.00 7482.03 00:30:13.910 =================================================================================================================== 00:30:13.910 Total : 4534.52 566.81 0.00 0.00 3524.02 1536.00 7482.03 00:30:13.910 0 00:30:13.910 08:22:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:13.910 08:22:44 -- host/digest.sh@92 -- # get_accel_stats 00:30:13.910 08:22:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:13.910 08:22:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:13.910 | select(.opcode=="crc32c") 00:30:13.910 | "\(.module_name) \(.executed)"' 00:30:13.910 08:22:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:14.170 08:22:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:14.170 08:22:44 -- host/digest.sh@93 -- # exp_module=software 00:30:14.170 08:22:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:14.170 08:22:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:14.170 08:22:44 -- host/digest.sh@97 -- # killprocess 1243567 00:30:14.170 08:22:44 -- common/autotest_common.sh@926 -- # '[' -z 1243567 ']' 00:30:14.170 08:22:44 -- common/autotest_common.sh@930 -- # kill -0 1243567 00:30:14.170 08:22:44 -- common/autotest_common.sh@931 -- # uname 00:30:14.170 08:22:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:14.170 08:22:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1243567 00:30:14.170 08:22:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:14.170 08:22:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:14.170 08:22:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1243567' 00:30:14.170 killing process with pid 1243567 00:30:14.170 08:22:44 -- common/autotest_common.sh@945 -- # kill 1243567 00:30:14.170 Received shutdown signal, test time was about 2.000000 seconds 00:30:14.170 00:30:14.170 Latency(us) 00:30:14.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.170 =================================================================================================================== 00:30:14.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:14.170 08:22:44 -- common/autotest_common.sh@950 -- # wait 1243567 00:30:14.170 08:22:44 -- host/digest.sh@126 -- # killprocess 1241136 00:30:14.170 08:22:44 -- common/autotest_common.sh@926 -- # '[' -z 1241136 ']' 00:30:14.170 08:22:44 -- common/autotest_common.sh@930 -- # kill -0 1241136 00:30:14.170 08:22:44 -- common/autotest_common.sh@931 -- # uname 00:30:14.170 08:22:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:14.170 08:22:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1241136 00:30:14.170 08:22:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:14.170 08:22:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:14.170 08:22:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1241136' 00:30:14.170 killing process with pid 1241136 00:30:14.170 08:22:44 -- common/autotest_common.sh@945 -- # kill 1241136 00:30:14.170 08:22:44 -- common/autotest_common.sh@950 -- # wait 1241136 
00:30:14.432 00:30:14.432 real 0m16.328s 00:30:14.432 user 0m32.074s 00:30:14.432 sys 0m3.369s 00:30:14.432 08:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.432 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:30:14.432 ************************************ 00:30:14.432 END TEST nvmf_digest_clean 00:30:14.432 ************************************ 00:30:14.432 08:22:44 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:30:14.432 08:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:14.432 08:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:14.432 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:30:14.432 ************************************ 00:30:14.432 START TEST nvmf_digest_error 00:30:14.432 ************************************ 00:30:14.432 08:22:44 -- common/autotest_common.sh@1104 -- # run_digest_error 00:30:14.432 08:22:44 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:30:14.432 08:22:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:14.432 08:22:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:14.432 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:30:14.432 08:22:44 -- nvmf/common.sh@469 -- # nvmfpid=1244287 00:30:14.432 08:22:44 -- nvmf/common.sh@470 -- # waitforlisten 1244287 00:30:14.432 08:22:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:14.432 08:22:44 -- common/autotest_common.sh@819 -- # '[' -z 1244287 ']' 00:30:14.432 08:22:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.432 08:22:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:14.432 08:22:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.432 08:22:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:14.432 08:22:44 -- common/autotest_common.sh@10 -- # set +x 00:30:14.432 [2024-06-11 08:22:45.042663] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:14.432 [2024-06-11 08:22:45.042722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.432 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.692 [2024-06-11 08:22:45.109474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.692 [2024-06-11 08:22:45.176736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:14.692 [2024-06-11 08:22:45.176853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.692 [2024-06-11 08:22:45.176861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.692 [2024-06-11 08:22:45.176869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.692 [2024-06-11 08:22:45.176885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.263 08:22:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:15.263 08:22:45 -- common/autotest_common.sh@852 -- # return 0 00:30:15.263 08:22:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:15.263 08:22:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:15.263 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 08:22:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.263 08:22:45 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:15.263 08:22:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.263 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:30:15.263 [2024-06-11 08:22:45.846810] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:15.263 08:22:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.263 08:22:45 -- host/digest.sh@104 -- # common_target_config 00:30:15.263 08:22:45 -- host/digest.sh@43 -- # rpc_cmd 00:30:15.263 08:22:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:15.263 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:30:15.523 null0 00:30:15.523 [2024-06-11 08:22:45.927645] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.523 [2024-06-11 08:22:45.951828] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.523 08:22:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:15.523 08:22:45 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:30:15.524 08:22:45 -- host/digest.sh@54 -- # local rw bs qd 00:30:15.524 08:22:45 -- host/digest.sh@56 -- # rw=randread 00:30:15.524 08:22:45 -- host/digest.sh@56 -- # bs=4096 00:30:15.524 08:22:45 -- host/digest.sh@56 -- # qd=128 00:30:15.524 08:22:45 -- host/digest.sh@58 -- # bperfpid=1244633 00:30:15.524 08:22:45 -- host/digest.sh@60 -- # waitforlisten 1244633 /var/tmp/bperf.sock 00:30:15.524 08:22:45 -- common/autotest_common.sh@819 -- # '[' -z 1244633 ']' 00:30:15.524 08:22:45 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:15.524 08:22:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:15.524 08:22:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:15.524 08:22:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:15.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:15.524 08:22:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:15.524 08:22:45 -- common/autotest_common.sh@10 -- # set +x 00:30:15.524 [2024-06-11 08:22:46.001822] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:15.524 [2024-06-11 08:22:46.001872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244633 ] 00:30:15.524 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.524 [2024-06-11 08:22:46.078131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.524 [2024-06-11 08:22:46.130477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.509 08:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:16.509 08:22:46 -- common/autotest_common.sh@852 -- # return 0 00:30:16.509 08:22:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:16.509 08:22:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:16.509 08:22:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:16.509 08:22:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.509 08:22:46 -- common/autotest_common.sh@10 -- # set +x 00:30:16.509 08:22:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.509 08:22:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:16.509 08:22:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:16.769 nvme0n1 00:30:16.769 08:22:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:16.769 08:22:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:16.769 08:22:47 -- common/autotest_common.sh@10 -- # set +x 00:30:16.769 08:22:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:16.769 08:22:47 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:16.769 08:22:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:16.769 Running I/O for 2 seconds... 
00:30:16.769 [2024-06-11 08:22:47.381738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:16.769 [2024-06-11 08:22:47.381767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.769 [2024-06-11 08:22:47.381775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.769 [2024-06-11 08:22:47.390473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:16.769 [2024-06-11 08:22:47.390495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.769 [2024-06-11 08:22:47.390502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:16.769 [2024-06-11 08:22:47.404098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:16.769 [2024-06-11 08:22:47.404118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:16.769 [2024-06-11 08:22:47.404126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.418119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.418139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.418146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.431456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.431478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.431485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.443121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.443140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.443146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.457231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.457250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.457256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.471258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.471277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.471283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.483277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.483295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.483302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.496955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.496973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.496979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.510776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.510794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.510800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.524958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.524976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.524983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.539481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.539499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.539506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.553618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.553636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.553642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.567583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.567602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.567608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.581483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.581502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.581508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.595710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.595729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.595735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.609680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.609697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.609704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.623931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.623950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.623956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.637758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.637776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.637782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.651744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.651762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:17.030 [2024-06-11 08:22:47.651768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.030 [2024-06-11 08:22:47.665500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.030 [2024-06-11 08:22:47.665518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.030 [2024-06-11 08:22:47.665527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.679497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.679515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.679521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.693484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.693502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.693509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.707310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.707329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.707335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.719318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.719336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.719342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.734273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.734291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.734297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.748275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.748293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:19777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.748299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.762412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.762430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.762444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.771255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.771273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.771280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.785167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.785185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.291 [2024-06-11 08:22:47.785192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.291 [2024-06-11 08:22:47.798420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.291 [2024-06-11 08:22:47.798441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.798447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.818401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.818420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.818426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.832143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.832161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.832168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.845576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.845594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.845601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.859656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.859674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.859680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.872651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.872674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.872684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.887079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.887097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.887103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.901687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.901705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.901717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.915315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.915333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.915340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.292 [2024-06-11 08:22:47.929751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.292 [2024-06-11 08:22:47.929773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.292 [2024-06-11 08:22:47.929780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:47.943333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 
00:30:17.553 [2024-06-11 08:22:47.943351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:47.943358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:47.958335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:47.958353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:47.958359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:47.972190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:47.972209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:47.972216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:47.984481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:47.984499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:47.984506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:47.996932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:47.996952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:47.996958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.006322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.006339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.006345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.020819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.020841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.020847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.034399] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.034417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.034423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.048959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.048977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.048983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.062587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.062605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.062611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.077284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.077302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.077308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.091423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.091445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.091456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.104655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.104673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.104679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.118998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.119015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.119022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.133419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.133441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.133447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.147687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.553 [2024-06-11 08:22:48.147705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.553 [2024-06-11 08:22:48.147711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.553 [2024-06-11 08:22:48.161903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.554 [2024-06-11 08:22:48.161921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.554 [2024-06-11 08:22:48.161927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.554 [2024-06-11 08:22:48.176199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.554 [2024-06-11 08:22:48.176217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.554 [2024-06-11 08:22:48.176223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.554 [2024-06-11 08:22:48.190843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.554 [2024-06-11 08:22:48.190860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.554 [2024-06-11 08:22:48.190866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.814 [2024-06-11 08:22:48.205208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.814 [2024-06-11 08:22:48.205225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.814 [2024-06-11 08:22:48.205232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.814 [2024-06-11 08:22:48.220004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.814 [2024-06-11 08:22:48.220022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.220029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.234090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.234108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.234114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.248102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.248121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.248127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.262099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.262117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.262127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.281939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.281957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.281963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.296170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.296189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.296195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.310667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.310685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.310692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.324131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.324153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.324164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.337632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.337650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.337656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.351773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.351795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.351802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.365836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.365854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.365861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.379541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.379561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.379571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.394136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.394154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.394160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.408833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.408851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.408857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.422636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.422654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:17.815 [2024-06-11 08:22:48.422660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.437512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.437531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.437538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:17.815 [2024-06-11 08:22:48.452038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:17.815 [2024-06-11 08:22:48.452056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:17.815 [2024-06-11 08:22:48.452063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.466530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.466551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.466558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.480560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.480581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.480587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.494892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.494913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.494921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.509412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.509430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.509444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.522736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.522754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1451 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.522761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.537658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.537676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.537682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.552093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.552111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.552117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.565932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.565949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.565955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.579550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.076 [2024-06-11 08:22:48.579568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.076 [2024-06-11 08:22:48.579574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.076 [2024-06-11 08:22:48.588381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.588398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.588404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.602291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.602308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.602314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.615784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.615802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.615808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.627932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.627953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.627960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.641624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.641643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.641649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.655157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.655175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.655182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.668103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.668121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.668128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.682800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.682818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.682824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.697381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.077 [2024-06-11 08:22:48.697399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.697405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.077 [2024-06-11 08:22:48.712035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 
00:30:18.077 [2024-06-11 08:22:48.712054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.077 [2024-06-11 08:22:48.712061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.725869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.725886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.725893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.740178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.740196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.740202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.753728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.753746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.753752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.768056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.768075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.768081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.781315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.781333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.781340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.794252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.794270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.794277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.807946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.807966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.807975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.820647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.820665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.820671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.833857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.833876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.833882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.847771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.847789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.847795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.861538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.861556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.861566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.875736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.875754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.875761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.888087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.338 [2024-06-11 08:22:48.888107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.338 [2024-06-11 08:22:48.888116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.338 [2024-06-11 08:22:48.902221] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.339 [2024-06-11 08:22:48.902239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.339 [2024-06-11 08:22:48.902246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.339 [2024-06-11 08:22:48.915066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.339 [2024-06-11 08:22:48.915084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.339 [2024-06-11 08:22:48.915091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.339 [2024-06-11 08:22:48.927202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.339 [2024-06-11 08:22:48.927220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.339 [2024-06-11 08:22:48.927227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.339 [2024-06-11 08:22:48.940346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.339 [2024-06-11 08:22:48.940365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.339 [2024-06-11 08:22:48.940372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.339 [2024-06-11 08:22:48.953985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.339 [2024-06-11 08:22:48.954003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.339 [2024-06-11 08:22:48.954010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.339 [2024-06-11 08:22:48.968467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.339 [2024-06-11 08:22:48.968485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.339 [2024-06-11 08:22:48.968492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.339 [2024-06-11 08:22:48.981508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.339 [2024-06-11 08:22:48.981529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.339 [2024-06-11 08:22:48.981538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:18.600 [2024-06-11 08:22:48.996154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:48.996173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:48.996179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.004800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.004818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.004825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.018832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.018850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.018856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.031573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.031590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.031597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.044762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.044779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.044785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.057980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.057998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.058005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.070201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.070219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.070226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.082892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.082909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.082919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.095652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.095670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.095676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.108114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.108133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.108139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.120885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.120903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.120909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.134028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.134046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.134052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.148476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.148494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.148500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.161500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.161523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.161530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.170939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.170957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.170963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.185537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.185554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.185561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.197864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.197885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.197891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.211992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.212011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.212017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.225134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.225152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.600 [2024-06-11 08:22:49.225159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.600 [2024-06-11 08:22:49.238251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.600 [2024-06-11 08:22:49.238269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.601 [2024-06-11 08:22:49.238276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.861 [2024-06-11 08:22:49.250926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.861 [2024-06-11 08:22:49.250944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:18.861 [2024-06-11 08:22:49.250950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.861 [2024-06-11 08:22:49.265424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.861 [2024-06-11 08:22:49.265450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.862 [2024-06-11 08:22:49.265457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.862 [2024-06-11 08:22:49.278545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.862 [2024-06-11 08:22:49.278563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.862 [2024-06-11 08:22:49.278570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.862 [2024-06-11 08:22:49.292641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.862 [2024-06-11 08:22:49.292659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.862 [2024-06-11 08:22:49.292666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.862 [2024-06-11 08:22:49.305064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.862 [2024-06-11 08:22:49.305082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.862 [2024-06-11 08:22:49.305088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.862 [2024-06-11 08:22:49.318474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.862 [2024-06-11 08:22:49.318491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.862 [2024-06-11 08:22:49.318497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.862 [2024-06-11 08:22:49.332640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.862 [2024-06-11 08:22:49.332659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.862 [2024-06-11 08:22:49.332666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:18.862 [2024-06-11 08:22:49.346191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070) 00:30:18.862 [2024-06-11 08:22:49.346210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:20102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:18.862 [2024-06-11 08:22:49.346216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:18.862 [2024-06-11 08:22:49.358936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12b2070)
00:30:18.862 [2024-06-11 08:22:49.358954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:18.862 [2024-06-11 08:22:49.358960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:18.862
00:30:18.862 Latency(us)
00:30:18.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:18.862 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:18.862 nvme0n1 : 2.00 18651.83 72.86 0.00 0.00 6858.38 1802.24 21080.75
00:30:18.862 ===================================================================================================================
00:30:18.862 Total : 18651.83 72.86 0.00 0.00 6858.38 1802.24 21080.75
00:30:18.862 0
00:30:18.862 08:22:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:18.862 08:22:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:18.862 08:22:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:18.862 | .driver_specific
00:30:18.862 | .nvme_error
00:30:18.862 | .status_code
00:30:18.862 | .command_transient_transport_error'
00:30:18.862 08:22:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:19.122 08:22:49 -- host/digest.sh@71 -- # (( 146 > 0 ))
00:30:19.122 08:22:49 -- host/digest.sh@73 -- # killprocess 1244633
00:30:19.122 08:22:49 -- common/autotest_common.sh@926 -- # '[' -z 1244633 ']'
00:30:19.122 08:22:49 -- common/autotest_common.sh@930 -- # kill -0 1244633
00:30:19.122 08:22:49 -- common/autotest_common.sh@931 -- # uname
00:30:19.122 08:22:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:19.122 08:22:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1244633
00:30:19.122 08:22:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:19.122 08:22:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:19.122 08:22:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1244633'
killing process with pid 1244633
08:22:49 -- common/autotest_common.sh@945 -- # kill 1244633
00:30:19.122 Received shutdown signal, test time was about 2.000000 seconds
00:30:19.122
00:30:19.122 Latency(us)
00:30:19.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:19.122 ===================================================================================================================
00:30:19.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:19.122 08:22:49 -- common/autotest_common.sh@950 -- # wait 1244633
00:30:19.122 08:22:49 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:30:19.122 08:22:49 -- host/digest.sh@54 -- # local rw bs qd
00:30:19.122 08:22:49 -- host/digest.sh@56 -- # rw=randread
00:30:19.122 08:22:49 -- host/digest.sh@56 -- # bs=131072
00:30:19.122 08:22:49 -- host/digest.sh@56 -- # qd=16
00:30:19.122 08:22:49 -- host/digest.sh@58 -- # bperfpid=1245332
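The get_transient_errcount helper traced above is what gates this stage: it reads the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat enables, and the (( 146 > 0 )) check passes because 146 READs completed with the transient transport error status during the 2-second run. A rough standalone equivalent, assuming the rpc.py path, bperf socket and bdev name shown in the trace (a sketch, not the test's own helper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Pull bdev I/O statistics from the bdevperf app and extract the error bucket
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"

With --bdev-retry-count -1 the failed reads are retried until they succeed, which is why the summary above still reports 0.00 Fail/s even though the error counter reached 146.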
08:22:49 -- host/digest.sh@60 -- # waitforlisten 1245332 /var/tmp/bperf.sock 00:30:19.122 08:22:49 -- common/autotest_common.sh@819 -- # '[' -z 1245332 ']' 00:30:19.122 08:22:49 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:19.123 08:22:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:19.123 08:22:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:19.123 08:22:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:19.123 08:22:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:19.123 08:22:49 -- common/autotest_common.sh@10 -- # set +x 00:30:19.123 [2024-06-11 08:22:49.762059] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:19.123 [2024-06-11 08:22:49.762130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245332 ] 00:30:19.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:19.123 Zero copy mechanism will not be used. 00:30:19.382 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.382 [2024-06-11 08:22:49.839540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.382 [2024-06-11 08:22:49.889403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.953 08:22:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:19.953 08:22:50 -- common/autotest_common.sh@852 -- # return 0 00:30:19.953 08:22:50 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:19.953 08:22:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:20.213 08:22:50 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:20.213 08:22:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.213 08:22:50 -- common/autotest_common.sh@10 -- # set +x 00:30:20.213 08:22:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.213 08:22:50 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.213 08:22:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.473 nvme0n1 00:30:20.473 08:22:51 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:20.473 08:22:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:20.473 08:22:51 -- common/autotest_common.sh@10 -- # set +x 00:30:20.473 08:22:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:20.473 08:22:51 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:20.473 08:22:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:20.734 I/O size of 131072 is greater than zero copy threshold 
(65536). 00:30:20.734 Zero copy mechanism will not be used. 00:30:20.734 Running I/O for 2 seconds... 00:30:20.734 [2024-06-11 08:22:51.201733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.201766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.201781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.206331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.206351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.206358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.211179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.211198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.211205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.216573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.216590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.216597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.228631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.228649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.228655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.234713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.234732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.234738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.241944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.241961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 
08:22:51.241967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.244956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.244972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.244979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.253080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.253097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.253103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.263308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.263328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.263334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.273447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.273464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.273470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.284586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.284603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.284610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.294558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.294575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.294581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.303540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.303557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.303563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.312727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.312743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.312749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.323386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.323403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.323409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.327596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.327613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.327619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.331476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.331492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.331498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.335285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.335301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.335308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.339614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.339630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.339636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.346314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.346330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.346337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.355844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.355860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.355866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.366081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.366097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.366104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.734 [2024-06-11 08:22:51.375925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.734 [2024-06-11 08:22:51.375942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.734 [2024-06-11 08:22:51.375948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.385841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.385858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.385864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.395443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.395459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.395465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.402382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.402399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.402408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.406620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.406637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.406643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.414993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.415010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.415017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.424722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.424739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.424745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.431396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.431413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.431420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.438302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.438319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.438325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.446056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.446074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.446080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.453961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.996 [2024-06-11 08:22:51.453979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.996 [2024-06-11 08:22:51.453985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.996 [2024-06-11 08:22:51.462495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 
00:30:20.996 [2024-06-11 08:22:51.462512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.462518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.471103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.471120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.471127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.478139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.478156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.478162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.487709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.487725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.487732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.493282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.493298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.493305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.502688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.502706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.502713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.509787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.509804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.509810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.519904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.519921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.519927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.528306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.528323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.528329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.538853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.538870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.538880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.548748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.548765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.548772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.559267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.559284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.559291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.569144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.569161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.569167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.578622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.578640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.578646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.588921] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.588938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.588945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.599460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.599477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.599491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.607398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.607415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.607421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.617240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.617258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.617264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.626819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.626839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.626845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:20.997 [2024-06-11 08:22:51.635914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:20.997 [2024-06-11 08:22:51.635930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:20.997 [2024-06-11 08:22:51.635936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.258 [2024-06-11 08:22:51.645012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.258 [2024-06-11 08:22:51.645028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.258 [2024-06-11 08:22:51.645035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
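For orientation while reading this second burst: the setup traced just before "Running I/O for 2 seconds..." attached the controller with data digest enabled (--ddgst) and asked the accel layer to corrupt every 32nd crc32c operation, so a predictable fraction of these 131072-byte reads must fail their data digest check and complete with (00/22), which is exactly what the entries around this point show. A hedged sketch of that RPC sequence follows; the rpc.py path, bperf socket, address and NQN are taken from the trace, while the target application's RPC socket is not shown in this log, so the default used below is only an assumption:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock                   # bdevperf app socket (from the trace)
  target_sock=${TARGET_SOCK:-/var/tmp/spdk.sock}   # nvmf target socket: assumed, not in this log

  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t disable       # clear any earlier injection
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # data digest on the TCP path
  "$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 32 # corrupt every 32nd crc32c
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$bperf_sock" perform_tests                                           # start the timed workload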
00:30:21.258 [2024-06-11 08:22:51.655750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.258 [2024-06-11 08:22:51.655767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.258 [2024-06-11 08:22:51.655774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.258 [2024-06-11 08:22:51.665598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.258 [2024-06-11 08:22:51.665614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.258 [2024-06-11 08:22:51.665621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.258 [2024-06-11 08:22:51.677548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.258 [2024-06-11 08:22:51.677565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.258 [2024-06-11 08:22:51.677571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.258 [2024-06-11 08:22:51.686662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.686679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.686686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.695317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.695341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.702381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.702399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.702405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.711903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.711920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.711926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.722855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.722872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.722878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.733564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.733581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.733587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.743950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.743967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.743973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.755261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.755279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.755285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.764630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.764647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.764653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.775865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.775882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.775889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.785702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.785719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.785725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.796578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.796595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.796604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.804268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.804285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.804291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.810792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.810809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.810815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.818573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.818590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.818597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.827789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.827806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.827812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.833178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.833194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.833201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.840733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.840749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.840755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.847695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.847712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.847718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.856968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.856985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.856991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.867082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.867102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.867110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.878743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.878760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.878766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.889305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.889322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.889328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.259 [2024-06-11 08:22:51.900270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.259 [2024-06-11 08:22:51.900286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.259 [2024-06-11 08:22:51.900293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.911084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.911101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 
[2024-06-11 08:22:51.911107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.921618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.921635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.921641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.931804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.931820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.931827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.941731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.941747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.941754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.952450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.952466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.952473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.963253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.963269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.963275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.974784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.974801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.974807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.984310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.984327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.984333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:51.994951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:51.994968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:51.994974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.005322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.005339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.005345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.015562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.015578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.015584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.025298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.025314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.025320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.033735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.033752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.033759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.041795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.041812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.041821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.048332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.048348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.048354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.055997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.056014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.056020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.060492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.060509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.060515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.064784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.064801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.064807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.071843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.071860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.071866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.080296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.080313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.080319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.090395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.090412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.090418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.100824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.100841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.521 [2024-06-11 08:22:52.100847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.521 [2024-06-11 08:22:52.111009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.521 [2024-06-11 08:22:52.111029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.522 [2024-06-11 08:22:52.111035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.522 [2024-06-11 08:22:52.119831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.522 [2024-06-11 08:22:52.119849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.522 [2024-06-11 08:22:52.119855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.522 [2024-06-11 08:22:52.129856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.522 [2024-06-11 08:22:52.129873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.522 [2024-06-11 08:22:52.129879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.522 [2024-06-11 08:22:52.136118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.522 [2024-06-11 08:22:52.136136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.522 [2024-06-11 08:22:52.136142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.522 [2024-06-11 08:22:52.146060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.522 [2024-06-11 08:22:52.146077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.522 [2024-06-11 08:22:52.146083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.522 [2024-06-11 08:22:52.155517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.522 [2024-06-11 08:22:52.155535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.522 [2024-06-11 08:22:52.155541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.522 [2024-06-11 08:22:52.164326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.522 
[2024-06-11 08:22:52.164344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.522 [2024-06-11 08:22:52.164350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.174563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.174582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.174588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.185042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.185059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.185069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.196854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.196871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.196877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.208320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.208338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.208344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.218330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.218347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.218354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.229899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.229916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.229922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.240489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.240506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.240512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.249595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.249612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.249618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.258830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.258847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.258853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.266475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.266492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.266498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.276315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.276335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.276341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.286809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.286826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.286832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.294989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.295007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.295013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.305785] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.305803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.305809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.315092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.315109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.315116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.322901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.322918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.322925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.333114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.333131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.333137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.341936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.341953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.341960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.352057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.352074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.352080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.361430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.361451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.361457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:30:21.782 [2024-06-11 08:22:52.369231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.369248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.369255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.379521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.379538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.379544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.386842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.386859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.386866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.394467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.394484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.394491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.405054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.405071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.405077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.414261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.414278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.414285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:21.782 [2024-06-11 08:22:52.423642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:21.782 [2024-06-11 08:22:52.423660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.782 [2024-06-11 08:22:52.423666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.435056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.435074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.435084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.445963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.445980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.445987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.457566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.457583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.457590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.469388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.469405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.469412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.481384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.481402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.481408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.492543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.492561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.492567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.502092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.502110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.502116] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.511731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.511748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.511754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.520661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.520678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.520684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.528806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.528879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.528886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.534035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.534052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.534058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.542996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.543013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.543020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.551625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.551642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.551648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.560458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.560475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.560481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.568218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.568235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.568241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.578229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.578246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.578252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.589630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.589648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.589654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.600966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.600983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.600990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.611354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.611372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.611378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.621756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.621773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.621780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.632294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.632312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:22.043 [2024-06-11 08:22:52.632318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.643019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.643036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.643043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.652900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.652917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.652923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.663278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.663296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.663302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.675076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.043 [2024-06-11 08:22:52.675094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.043 [2024-06-11 08:22:52.675100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.043 [2024-06-11 08:22:52.686347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.044 [2024-06-11 08:22:52.686364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.044 [2024-06-11 08:22:52.686370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.305 [2024-06-11 08:22:52.694631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.305 [2024-06-11 08:22:52.694648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.305 [2024-06-11 08:22:52.694658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.305 [2024-06-11 08:22:52.703543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.305 [2024-06-11 08:22:52.703560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.305 [2024-06-11 08:22:52.703566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.305 [2024-06-11 08:22:52.713075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.305 [2024-06-11 08:22:52.713092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.305 [2024-06-11 08:22:52.713098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.305 [2024-06-11 08:22:52.721446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.305 [2024-06-11 08:22:52.721463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.305 [2024-06-11 08:22:52.721470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.305 [2024-06-11 08:22:52.728826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.305 [2024-06-11 08:22:52.728843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.305 [2024-06-11 08:22:52.728849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.305 [2024-06-11 08:22:52.738121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.305 [2024-06-11 08:22:52.738139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.305 [2024-06-11 08:22:52.738145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.305 [2024-06-11 08:22:52.745023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.305 [2024-06-11 08:22:52.745039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.745045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.750069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.750086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.750093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.758564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.758582] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.758588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.764234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.764255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.764261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.774587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.774604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.774611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.785643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.785660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.785666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.794812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.794829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.794835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.802294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.802312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.802318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.808077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.808094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.808101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.816218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.816236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.816242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.825518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.825535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.825542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.832834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.832851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.832857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.843551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.843569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.843575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.852873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.852890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.852896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.863931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.863948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.863954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.875265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.875282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.875288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.886171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 
00:30:22.306 [2024-06-11 08:22:52.886189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.886195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.895500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.895517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.895524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.904683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.904700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.904706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.914400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.914417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.914423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.924677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.924697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.924704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.932285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.932303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.932309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.941346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.941363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.941369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.306 [2024-06-11 08:22:52.950254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.306 [2024-06-11 08:22:52.950271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.306 [2024-06-11 08:22:52.950277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.567 [2024-06-11 08:22:52.960226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.567 [2024-06-11 08:22:52.960244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:52.960250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:52.970763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:52.970780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:52.970787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:52.978531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:52.978548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:52.978554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:52.987391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:52.987408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:52.987414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:52.997973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:52.997990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:52.997996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.006385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.006402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.006408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.012828] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.012845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.012851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.020647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.020664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.020670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.027592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.027609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.027616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.035527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.035544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.035550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.044702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.044718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.044725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.053530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.053547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.053553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.059841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.059858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.059864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:30:22.568 [2024-06-11 08:22:53.065607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.065625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.065634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.075198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.075216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.075222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.084194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.084211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.084216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.089589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.089607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.089613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.099928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.099945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.099951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.109700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.109717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.109723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.118543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.118560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.118566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.125738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.125754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.125761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.135999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.136016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.136022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.143478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.143499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.568 [2024-06-11 08:22:53.143505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:22.568 [2024-06-11 08:22:53.148865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.568 [2024-06-11 08:22:53.148882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.569 [2024-06-11 08:22:53.148888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:22.569 [2024-06-11 08:22:53.153261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.569 [2024-06-11 08:22:53.153278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.569 [2024-06-11 08:22:53.153284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:22.569 [2024-06-11 08:22:53.161193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.569 [2024-06-11 08:22:53.161209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.569 [2024-06-11 08:22:53.161216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:22.569 [2024-06-11 08:22:53.170916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00) 00:30:22.569 [2024-06-11 08:22:53.170933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:22.569 [2024-06-11 08:22:53.170939] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:22.569 [2024-06-11 08:22:53.181425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00)
00:30:22.569 [2024-06-11 08:22:53.181446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.569 [2024-06-11 08:22:53.181453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:22.569 [2024-06-11 08:22:53.193385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1851d00)
00:30:22.569 [2024-06-11 08:22:53.193402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:22.569 [2024-06-11 08:22:53.193409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:22.569
00:30:22.569 Latency(us)
00:30:22.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:22.569 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:22.569 nvme0n1 : 2.00 3460.10 432.51 0.00 0.00 4620.09 651.95 12561.07
00:30:22.569 ===================================================================================================================
00:30:22.569 Total : 3460.10 432.51 0.00 0.00 4620.09 651.95 12561.07
00:30:22.569 0
00:30:22.830 08:22:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:22.830 08:22:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:22.830 08:22:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:22.830 | .driver_specific
00:30:22.830 | .nvme_error
00:30:22.830 | .status_code
00:30:22.830 | .command_transient_transport_error'
00:30:22.830 08:22:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:22.830 08:22:53 -- host/digest.sh@71 -- # (( 223 > 0 ))
00:30:22.830 08:22:53 -- host/digest.sh@73 -- # killprocess 1245332
00:30:22.830 08:22:53 -- common/autotest_common.sh@926 -- # '[' -z 1245332 ']'
00:30:22.830 08:22:53 -- common/autotest_common.sh@930 -- # kill -0 1245332
00:30:22.830 08:22:53 -- common/autotest_common.sh@931 -- # uname
00:30:22.830 08:22:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:22.830 08:22:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1245332
00:30:22.830 08:22:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:22.830 08:22:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:22.830 08:22:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1245332' killing process with pid 1245332 08:22:53 -- common/autotest_common.sh@945 -- # kill 1245332 Received shutdown signal, test time was about 2.000000 seconds 00
00:30:22.830 Latency(us)
00:30:22.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:22.830 ===================================================================================================================
00:30:22.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:22.830 08:22:53 -- common/autotest_common.sh@950 -- # wait 1245332
00:30:23.090 08:22:53 --
host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:30:23.090 08:22:53 -- host/digest.sh@54 -- # local rw bs qd 00:30:23.090 08:22:53 -- host/digest.sh@56 -- # rw=randwrite 00:30:23.090 08:22:53 -- host/digest.sh@56 -- # bs=4096 00:30:23.090 08:22:53 -- host/digest.sh@56 -- # qd=128 00:30:23.090 08:22:53 -- host/digest.sh@58 -- # bperfpid=1246022 00:30:23.090 08:22:53 -- host/digest.sh@60 -- # waitforlisten 1246022 /var/tmp/bperf.sock 00:30:23.090 08:22:53 -- common/autotest_common.sh@819 -- # '[' -z 1246022 ']' 00:30:23.090 08:22:53 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:23.090 08:22:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:23.090 08:22:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:23.090 08:22:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:23.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:23.090 08:22:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:23.090 08:22:53 -- common/autotest_common.sh@10 -- # set +x 00:30:23.090 [2024-06-11 08:22:53.586919] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:23.090 [2024-06-11 08:22:53.586973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246022 ] 00:30:23.090 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.090 [2024-06-11 08:22:53.664565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.090 [2024-06-11 08:22:53.716555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.031 08:22:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:24.031 08:22:54 -- common/autotest_common.sh@852 -- # return 0 00:30:24.031 08:22:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.031 08:22:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:24.031 08:22:54 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:24.031 08:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.031 08:22:54 -- common/autotest_common.sh@10 -- # set +x 00:30:24.031 08:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.031 08:22:54 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.031 08:22:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:24.291 nvme0n1 00:30:24.291 08:22:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:24.291 08:22:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:24.291 08:22:54 -- common/autotest_common.sh@10 -- # set +x 00:30:24.291 08:22:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:24.291 08:22:54 -- host/digest.sh@69 -- # bperf_py 
perform_tests 00:30:24.291 08:22:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:24.552 Running I/O for 2 seconds... 00:30:24.552 [2024-06-11 08:22:54.989608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ec408 00:30:24.552 [2024-06-11 08:22:54.990427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:54.990459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.001154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e73e0 00:30:24.552 [2024-06-11 08:22:55.001512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.001530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.012585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ef6a8 00:30:24.552 [2024-06-11 08:22:55.012747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.012763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.025687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f8618 00:30:24.552 [2024-06-11 08:22:55.026915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.026932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.035602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e84c0 00:30:24.552 [2024-06-11 08:22:55.036404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.036420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.047034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f5378 00:30:24.552 [2024-06-11 08:22:55.047863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.047879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.058588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e7818 00:30:24.552 [2024-06-11 08:22:55.059453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:4062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.059469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.070255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ef6a8 00:30:24.552 [2024-06-11 08:22:55.071182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.071198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.081684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190edd58 00:30:24.552 [2024-06-11 08:22:55.082156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.082172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.093058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0788 00:30:24.552 [2024-06-11 08:22:55.093325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.093342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.104415] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ed4e8 00:30:24.552 [2024-06-11 08:22:55.104668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.104685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.115803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e1b48 00:30:24.552 [2024-06-11 08:22:55.116178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.552 [2024-06-11 08:22:55.116194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.552 [2024-06-11 08:22:55.127130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ed4e8 00:30:24.552 [2024-06-11 08:22:55.127454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.553 [2024-06-11 08:22:55.127470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.553 [2024-06-11 08:22:55.140202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f4298 00:30:24.553 [2024-06-11 08:22:55.141528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.553 [2024-06-11 08:22:55.141543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:24.553 [2024-06-11 08:22:55.150095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0788 00:30:24.553 [2024-06-11 08:22:55.151018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.553 [2024-06-11 08:22:55.151034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:24.553 [2024-06-11 08:22:55.161703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0788 00:30:24.553 [2024-06-11 08:22:55.162746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.553 [2024-06-11 08:22:55.162765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:24.553 [2024-06-11 08:22:55.174389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0ff8 00:30:24.553 [2024-06-11 08:22:55.175496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.553 [2024-06-11 08:22:55.175512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:24.553 [2024-06-11 08:22:55.184607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f4298 00:30:24.553 [2024-06-11 08:22:55.185396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.553 [2024-06-11 08:22:55.185412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.553 [2024-06-11 08:22:55.196062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ef6a8 00:30:24.553 [2024-06-11 08:22:55.196859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.553 [2024-06-11 08:22:55.196875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.207490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e9168 00:30:24.814 [2024-06-11 08:22:55.208282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.208298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.218963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fc560 00:30:24.814 [2024-06-11 
08:22:55.219753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.219769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.230345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6738 00:30:24.814 [2024-06-11 08:22:55.231136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.231151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.241804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6b70 00:30:24.814 [2024-06-11 08:22:55.242610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.242625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.253188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e95a0 00:30:24.814 [2024-06-11 08:22:55.253958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.253974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.264655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ea680 00:30:24.814 [2024-06-11 08:22:55.265455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.265471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.276050] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eb328 00:30:24.814 [2024-06-11 08:22:55.276803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.276820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.287540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fe2e8 00:30:24.814 [2024-06-11 08:22:55.288333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.288349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.298940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ebb98 
00:30:24.814 [2024-06-11 08:22:55.299725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.299741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.310390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e1f80 00:30:24.814 [2024-06-11 08:22:55.311189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.311205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.321747] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f31b8 00:30:24.814 [2024-06-11 08:22:55.322532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.322548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.333212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eee38 00:30:24.814 [2024-06-11 08:22:55.334012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.334028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.344596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fb8b8 00:30:24.814 [2024-06-11 08:22:55.345382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.345398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.356078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e84c0 00:30:24.814 [2024-06-11 08:22:55.356875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.356891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.367472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e23b8 00:30:24.814 [2024-06-11 08:22:55.368254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.368270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.378917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with 
pdu=0x2000190f4f40 00:30:24.814 [2024-06-11 08:22:55.379712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.814 [2024-06-11 08:22:55.379728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:24.814 [2024-06-11 08:22:55.390309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ecc78 00:30:24.814 [2024-06-11 08:22:55.391098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.815 [2024-06-11 08:22:55.391114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:24.815 [2024-06-11 08:22:55.403000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e95a0 00:30:24.815 [2024-06-11 08:22:55.404024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.815 [2024-06-11 08:22:55.404040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:24.815 [2024-06-11 08:22:55.414332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f20d8 00:30:24.815 [2024-06-11 08:22:55.415460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.815 [2024-06-11 08:22:55.415476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:24.815 [2024-06-11 08:22:55.424657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fa7d8 00:30:24.815 [2024-06-11 08:22:55.425464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.815 [2024-06-11 08:22:55.425481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:24.815 [2024-06-11 08:22:55.435765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f1868 00:30:24.815 [2024-06-11 08:22:55.436805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.815 [2024-06-11 08:22:55.436821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:24.815 [2024-06-11 08:22:55.446306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f5378 00:30:24.815 [2024-06-11 08:22:55.446584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:24.815 [2024-06-11 08:22:55.446599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:24.815 [2024-06-11 08:22:55.458553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x234dea0) with pdu=0x2000190edd58 00:30:25.076 [2024-06-11 08:22:55.459576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.459598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.470536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f2948 00:30:25.076 [2024-06-11 08:22:55.471308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.471323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.481458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fb8b8 00:30:25.076 [2024-06-11 08:22:55.482486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.482502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.492855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f1868 00:30:25.076 [2024-06-11 08:22:55.493898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.493914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.504251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6b70 00:30:25.076 [2024-06-11 08:22:55.505335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.505352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.515687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e95a0 00:30:25.076 [2024-06-11 08:22:55.516783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.516799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.527119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e1710 00:30:25.076 [2024-06-11 08:22:55.528228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.528243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.538526] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6300 00:30:25.076 [2024-06-11 08:22:55.539685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.539701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.549901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e9168 00:30:25.076 [2024-06-11 08:22:55.550798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.550815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.561293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190edd58 00:30:25.076 [2024-06-11 08:22:55.562339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.562357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.572703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e0a68 00:30:25.076 [2024-06-11 08:22:55.573767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.573783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.584135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f1868 00:30:25.076 [2024-06-11 08:22:55.585219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.585234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.595512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f6020 00:30:25.076 [2024-06-11 08:22:55.596612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.596627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.606926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0350 00:30:25.076 [2024-06-11 08:22:55.608034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.608050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.618420] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f4b08 00:30:25.076 [2024-06-11 08:22:55.619373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.619390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.629913] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e7c50 00:30:25.076 [2024-06-11 08:22:55.630235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.630252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.641150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f4b08 00:30:25.076 [2024-06-11 08:22:55.641649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.076 [2024-06-11 08:22:55.641665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:25.076 [2024-06-11 08:22:55.652643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f20d8 00:30:25.076 [2024-06-11 08:22:55.652973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.077 [2024-06-11 08:22:55.652989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:25.077 [2024-06-11 08:22:55.665427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f31b8 00:30:25.077 [2024-06-11 08:22:55.666496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.077 [2024-06-11 08:22:55.666511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:25.077 [2024-06-11 08:22:55.675961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0bc0 00:30:25.077 [2024-06-11 08:22:55.676756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.077 [2024-06-11 08:22:55.676773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.077 [2024-06-11 08:22:55.686784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fc128 00:30:25.077 [2024-06-11 08:22:55.687776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.077 [2024-06-11 08:22:55.687791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:25.077 
[2024-06-11 08:22:55.698420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f46d0 00:30:25.077 [2024-06-11 08:22:55.699481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.077 [2024-06-11 08:22:55.699497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:25.077 [2024-06-11 08:22:55.710090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f7da8 00:30:25.077 [2024-06-11 08:22:55.710610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.077 [2024-06-11 08:22:55.710626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.723013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fef90 00:30:25.338 [2024-06-11 08:22:55.724137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.724153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.733722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f1ca0 00:30:25.338 [2024-06-11 08:22:55.734704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.734721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.744505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fcdd0 00:30:25.338 [2024-06-11 08:22:55.745601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.745616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.755941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e27f0 00:30:25.338 [2024-06-11 08:22:55.757138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.757158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.767938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e4de8 00:30:25.338 [2024-06-11 08:22:55.768884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.768900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006e 
p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.778797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6300 00:30:25.338 [2024-06-11 08:22:55.779907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.779923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.789576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ebb98 00:30:25.338 [2024-06-11 08:22:55.790213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.790229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.801719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ef6a8 00:30:25.338 [2024-06-11 08:22:55.802344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.802360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.814677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f46d0 00:30:25.338 [2024-06-11 08:22:55.815979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.815995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.824538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f6020 00:30:25.338 [2024-06-11 08:22:55.825654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.825670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.836025] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e5220 00:30:25.338 [2024-06-11 08:22:55.837174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.837191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.848948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fd208 00:30:25.338 [2024-06-11 08:22:55.850272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.850288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.858775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e5ec8 00:30:25.338 [2024-06-11 08:22:55.859843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.859862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.870259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fc998 00:30:25.338 [2024-06-11 08:22:55.871396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.871412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.882043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f57b0 00:30:25.338 [2024-06-11 08:22:55.882870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.882886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.893324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fb048 00:30:25.338 [2024-06-11 08:22:55.894616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.894632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.903823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190feb58 00:30:25.338 [2024-06-11 08:22:55.904366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.904381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.915161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eaef0 00:30:25.338 [2024-06-11 08:22:55.916000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.927221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fe720 00:30:25.338 [2024-06-11 08:22:55.928348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.928365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.940128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fa7d8 00:30:25.338 [2024-06-11 08:22:55.941347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.338 [2024-06-11 08:22:55.941363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:25.338 [2024-06-11 08:22:55.950863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e5658 00:30:25.339 [2024-06-11 08:22:55.951885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.339 [2024-06-11 08:22:55.951902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:25.339 [2024-06-11 08:22:55.961609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fd640 00:30:25.339 [2024-06-11 08:22:55.962703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.339 [2024-06-11 08:22:55.962720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:25.339 [2024-06-11 08:22:55.971488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fcdd0 00:30:25.339 [2024-06-11 08:22:55.971850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.339 [2024-06-11 08:22:55.971865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:55.983510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e2c28 00:30:25.599 [2024-06-11 08:22:55.984359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:55.984375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:55.994896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6fa8 00:30:25.599 [2024-06-11 08:22:55.995940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:55.995957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:56.007806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f31b8 00:30:25.599 [2024-06-11 08:22:56.008679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:56.008695] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:56.017688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fe720 00:30:25.599 [2024-06-11 08:22:56.018473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:56.018489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:56.030546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ea680 00:30:25.599 [2024-06-11 08:22:56.031980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:56.031996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:56.041889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f31b8 00:30:25.599 [2024-06-11 08:22:56.043320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:56.043336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:56.053231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eaab8 00:30:25.599 [2024-06-11 08:22:56.054654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:56.054671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:56.064679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e5220 00:30:25.599 [2024-06-11 08:22:56.066076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.599 [2024-06-11 08:22:56.066092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:25.599 [2024-06-11 08:22:56.076043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e4578 00:30:25.599 [2024-06-11 08:22:56.077441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.077457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.087393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f46d0 00:30:25.600 [2024-06-11 08:22:56.088742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.088758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.098736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ea248 00:30:25.600 [2024-06-11 08:22:56.100092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.100108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.110078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e3498 00:30:25.600 [2024-06-11 08:22:56.111429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.111449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.121459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e12d8 00:30:25.600 [2024-06-11 08:22:56.122801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.122817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.132782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e5658 00:30:25.600 [2024-06-11 08:22:56.134105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.134121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.143779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ea248 00:30:25.600 [2024-06-11 08:22:56.144729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.144745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.155133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fe720 00:30:25.600 [2024-06-11 08:22:56.156074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.156093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.165509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e9e10 00:30:25.600 [2024-06-11 08:22:56.166016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 
08:22:56.166031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.178590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fb8b8 00:30:25.600 [2024-06-11 08:22:56.179246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.189978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e95a0 00:30:25.600 [2024-06-11 08:22:56.190629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.190645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.201432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e3060 00:30:25.600 [2024-06-11 08:22:56.202075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.202092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.212877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f5378 00:30:25.600 [2024-06-11 08:22:56.213512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.213528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.223692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190df988 00:30:25.600 [2024-06-11 08:22:56.224757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.224774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:25.600 [2024-06-11 08:22:56.236501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f1430 00:30:25.600 [2024-06-11 08:22:56.237456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.600 [2024-06-11 08:22:56.237472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.247793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eb328 00:30:25.861 [2024-06-11 08:22:56.249287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:25.861 [2024-06-11 08:22:56.249304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.259145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ef6a8 00:30:25.861 [2024-06-11 08:22:56.260651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.260668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.270497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f81e0 00:30:25.861 [2024-06-11 08:22:56.271977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.271993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.281227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190edd58 00:30:25.861 [2024-06-11 08:22:56.282286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.282302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.291087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6738 00:30:25.861 [2024-06-11 08:22:56.291755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.291772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.302513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0350 00:30:25.861 [2024-06-11 08:22:56.303207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.303223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.313947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f1430 00:30:25.861 [2024-06-11 08:22:56.314670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.314686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.325998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f6020 00:30:25.861 [2024-06-11 08:22:56.327161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11371 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.327178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.339304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eff18 00:30:25.861 [2024-06-11 08:22:56.340689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.340706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.349111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e7818 00:30:25.861 [2024-06-11 08:22:56.349987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.350003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.360559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f3a28 00:30:25.861 [2024-06-11 08:22:56.361727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.361743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.372017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e73e0 00:30:25.861 [2024-06-11 08:22:56.372953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.372969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.383338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f1ca0 00:30:25.861 [2024-06-11 08:22:56.384242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.384257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.396209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fc998 00:30:25.861 [2024-06-11 08:22:56.397723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.397739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:25.861 [2024-06-11 08:22:56.406968] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190edd58 00:30:25.861 [2024-06-11 08:22:56.408117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24752 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.861 [2024-06-11 08:22:56.408133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.416789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fc560 00:30:25.862 [2024-06-11 08:22:56.417487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.417502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.430333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ea680 00:30:25.862 [2024-06-11 08:22:56.431281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.431296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.440284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ec840 00:30:25.862 [2024-06-11 08:22:56.441197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.441213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.451684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e95a0 00:30:25.862 [2024-06-11 08:22:56.452515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.452534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.463054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e88f8 00:30:25.862 [2024-06-11 08:22:56.463979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.463995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.475919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190dece0 00:30:25.862 [2024-06-11 08:22:56.477489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.477504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.487256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e49b0 00:30:25.862 [2024-06-11 08:22:56.488815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:109 nsid:1 lba:24384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.488831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.862 [2024-06-11 08:22:56.498252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f4298 00:30:25.862 [2024-06-11 08:22:56.499450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.862 [2024-06-11 08:22:56.499466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.508583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ef6a8 00:30:26.123 [2024-06-11 08:22:56.509306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.509322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.519722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f4b08 00:30:26.123 [2024-06-11 08:22:56.520664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.520679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.531096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0bc0 00:30:26.123 [2024-06-11 08:22:56.532020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.532036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.542448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e01f8 00:30:26.123 [2024-06-11 08:22:56.543397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.543412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.553758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f4f40 00:30:26.123 [2024-06-11 08:22:56.554707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.554723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.565052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f2510 00:30:26.123 [2024-06-11 08:22:56.565952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.565969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.576373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eaab8 00:30:26.123 [2024-06-11 08:22:56.577303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.577319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.587696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e1b48 00:30:26.123 [2024-06-11 08:22:56.588606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.588622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.599023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f31b8 00:30:26.123 [2024-06-11 08:22:56.599925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.599941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.610402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f8a50 00:30:26.123 [2024-06-11 08:22:56.611321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.611336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.622010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f2948 00:30:26.123 [2024-06-11 08:22:56.622797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.622813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.633434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190df988 00:30:26.123 [2024-06-11 08:22:56.633663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.633678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.644828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f46d0 00:30:26.123 [2024-06-11 
08:22:56.645080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.645097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.656192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190edd58 00:30:26.123 [2024-06-11 08:22:56.656454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.656469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.667598] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f20d8 00:30:26.123 [2024-06-11 08:22:56.667828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.667843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.678973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190de8a8 00:30:26.123 [2024-06-11 08:22:56.679176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.679191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.690318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e4de8 00:30:26.123 [2024-06-11 08:22:56.690511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.690526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.701685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f0bc0 00:30:26.123 [2024-06-11 08:22:56.701926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.701943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.713207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eaab8 00:30:26.123 [2024-06-11 08:22:56.713399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.713414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.724550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e4578 
00:30:26.123 [2024-06-11 08:22:56.724755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.724770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.735919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eee38 00:30:26.123 [2024-06-11 08:22:56.736127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.736142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.748789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e4578 00:30:26.123 [2024-06-11 08:22:56.749646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.749665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:26.123 [2024-06-11 08:22:56.759325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f3a28 00:30:26.123 [2024-06-11 08:22:56.759904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.123 [2024-06-11 08:22:56.759921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.770163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e3498 00:30:26.385 [2024-06-11 08:22:56.770935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.770952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.781565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190de038 00:30:26.385 [2024-06-11 08:22:56.782360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.782376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.792358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ec840 00:30:26.385 [2024-06-11 08:22:56.792707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.792722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.804405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with 
pdu=0x2000190de8a8 00:30:26.385 [2024-06-11 08:22:56.805243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.805259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.815777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f7538 00:30:26.385 [2024-06-11 08:22:56.816015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.816031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.827159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f7da8 00:30:26.385 [2024-06-11 08:22:56.827381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.827396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.838519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190fa3a0 00:30:26.385 [2024-06-11 08:22:56.838757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.838773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.849908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190edd58 00:30:26.385 [2024-06-11 08:22:56.850131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.385 [2024-06-11 08:22:56.850147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:26.385 [2024-06-11 08:22:56.861290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190dfdc0 00:30:26.385 [2024-06-11 08:22:56.861528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.861543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.872699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e6300 00:30:26.386 [2024-06-11 08:22:56.872908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.872923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.885782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x234dea0) with pdu=0x2000190e01f8 00:30:26.386 [2024-06-11 08:22:56.886957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.886974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.895676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f9b30 00:30:26.386 [2024-06-11 08:22:56.896054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.896070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.906934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190eff18 00:30:26.386 [2024-06-11 08:22:56.907864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.907880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.917803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e5220 00:30:26.386 [2024-06-11 08:22:56.918143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.918158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.929290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ee5c8 00:30:26.386 [2024-06-11 08:22:56.929935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.929951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.941345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190de038 00:30:26.386 [2024-06-11 08:22:56.942268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.942285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.952773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190e95a0 00:30:26.386 [2024-06-11 08:22:56.953155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.953171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.964172] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x234dea0) with pdu=0x2000190ea248 00:30:26.386 [2024-06-11 08:22:56.964530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.964549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:26.386 [2024-06-11 08:22:56.975618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234dea0) with pdu=0x2000190f2510 00:30:26.386 [2024-06-11 08:22:56.975938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:26.386 [2024-06-11 08:22:56.975954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:26.386 00:30:26.386 Latency(us) 00:30:26.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.386 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:26.386 nvme0n1 : 2.00 22289.08 87.07 0.00 0.00 5735.39 3549.87 16165.55 00:30:26.386 =================================================================================================================== 00:30:26.386 Total : 22289.08 87.07 0.00 0.00 5735.39 3549.87 16165.55 00:30:26.386 0 00:30:26.386 08:22:57 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:26.386 08:22:57 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:26.386 08:22:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:26.386 08:22:57 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:26.386 | .driver_specific 00:30:26.386 | .nvme_error 00:30:26.386 | .status_code 00:30:26.386 | .command_transient_transport_error' 00:30:26.647 08:22:57 -- host/digest.sh@71 -- # (( 175 > 0 )) 00:30:26.647 08:22:57 -- host/digest.sh@73 -- # killprocess 1246022 00:30:26.647 08:22:57 -- common/autotest_common.sh@926 -- # '[' -z 1246022 ']' 00:30:26.647 08:22:57 -- common/autotest_common.sh@930 -- # kill -0 1246022 00:30:26.647 08:22:57 -- common/autotest_common.sh@931 -- # uname 00:30:26.647 08:22:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:26.647 08:22:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1246022 00:30:26.647 08:22:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:26.647 08:22:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:26.647 08:22:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1246022' 00:30:26.647 killing process with pid 1246022 00:30:26.647 08:22:57 -- common/autotest_common.sh@945 -- # kill 1246022 00:30:26.647 Received shutdown signal, test time was about 2.000000 seconds 00:30:26.647 00:30:26.647 Latency(us) 00:30:26.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.647 =================================================================================================================== 00:30:26.647 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.647 08:22:57 -- common/autotest_common.sh@950 -- # wait 1246022 00:30:26.907 08:22:57 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:30:26.907 08:22:57 -- host/digest.sh@54 -- # local rw bs qd 00:30:26.907 08:22:57 -- host/digest.sh@56 -- # rw=randwrite 
00:30:26.907 08:22:57 -- host/digest.sh@56 -- # bs=131072 00:30:26.907 08:22:57 -- host/digest.sh@56 -- # qd=16 00:30:26.907 08:22:57 -- host/digest.sh@58 -- # bperfpid=1246716 00:30:26.907 08:22:57 -- host/digest.sh@60 -- # waitforlisten 1246716 /var/tmp/bperf.sock 00:30:26.907 08:22:57 -- common/autotest_common.sh@819 -- # '[' -z 1246716 ']' 00:30:26.907 08:22:57 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:26.907 08:22:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:26.907 08:22:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:26.907 08:22:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:26.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:26.908 08:22:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:26.908 08:22:57 -- common/autotest_common.sh@10 -- # set +x 00:30:26.908 [2024-06-11 08:22:57.369956] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:26.908 [2024-06-11 08:22:57.370009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246716 ] 00:30:26.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:26.908 Zero copy mechanism will not be used. 00:30:26.908 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.908 [2024-06-11 08:22:57.445773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.908 [2024-06-11 08:22:57.496069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.480 08:22:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:27.480 08:22:58 -- common/autotest_common.sh@852 -- # return 0 00:30:27.480 08:22:58 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.741 08:22:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.741 08:22:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:27.741 08:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.741 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:30:27.741 08:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:27.741 08:22:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.741 08:22:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.001 nvme0n1 00:30:28.001 08:22:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:28.001 08:22:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:28.001 08:22:58 -- common/autotest_common.sh@10 -- # set +x 00:30:28.001 08:22:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:28.001 08:22:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:28.001 08:22:58 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.263 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:28.263 Zero copy mechanism will not be used. 00:30:28.263 Running I/O for 2 seconds... 00:30:28.263 [2024-06-11 08:22:58.713559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.713845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.713871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.720591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.720661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.720682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.726461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.726709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.726726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.733412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.733697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.733715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.740684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.740905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.740921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.747924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.747993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.748009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.755193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 
08:22:58.755290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.755306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.762253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.762334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.762350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.768655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.768956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.768974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.775970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.776283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.776300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.263 [2024-06-11 08:22:58.784725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.263 [2024-06-11 08:22:58.784858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.263 [2024-06-11 08:22:58.784874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.790519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.790594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.790610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.797515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.797587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.797601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.804900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with 
pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.804969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.804985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.810416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.810547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.810563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.817337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.817601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.817618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.827098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.827303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.827318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.836376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.836638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.836655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.847200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.847491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.847508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.857380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.857589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.857605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.866909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.867195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.867211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.877486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.877705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.877720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.886546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.886825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.886842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.897611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.264 [2024-06-11 08:22:58.897862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.264 [2024-06-11 08:22:58.897879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.264 [2024-06-11 08:22:58.908536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.908850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.908867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.918959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.919236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.919253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.928218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.928525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.928541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.936696] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.936987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.937007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.942322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.942607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.942623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.946276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.946341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.946356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.949780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.949850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.949865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.953198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.953305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.953322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.956568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.956689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.956704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.959866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.959942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.959957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:28.527 [2024-06-11 08:22:58.963246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.963323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.963337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.967448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.967553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.967569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.970871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.970946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.970961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.974207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.974310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.974326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.979174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.979432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.979454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.986018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.986087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.986103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.992214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.992478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.992495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:58.997956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:58.998161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:58.998176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.004529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.004645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:59.004661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.009008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.009105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:59.009119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.013646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.013716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:59.013734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.019001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.019074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:59.019089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.023230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.023339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:59.023354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.027683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.027759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:59.027774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.031577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.031871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.527 [2024-06-11 08:22:59.031888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.527 [2024-06-11 08:22:59.035036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.527 [2024-06-11 08:22:59.035146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.035162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.038375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.038478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.038494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.041706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.041783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.041798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.045042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.045121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.045136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.048587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.048674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.048689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.053683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.053860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.053876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.063449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.063656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.063672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.074155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.074244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.074259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.084642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.084863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.084879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.095745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.095862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.095878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.106906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.107228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.107245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.118037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.118328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.118345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.128382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.128798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 
08:22:59.128815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.139233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.139519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.139537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.149839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.150097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.150113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.160280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.160481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.160497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.528 [2024-06-11 08:22:59.171195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.528 [2024-06-11 08:22:59.171493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.528 [2024-06-11 08:22:59.171509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.181775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.182046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.182063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.192141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.192422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.192444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.202711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.202987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:28.790 [2024-06-11 08:22:59.203005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.211751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.212038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.212054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.218493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.218708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.218726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.222848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.222919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.222934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.226373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.226508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.226523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.230096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.230241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.230256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.238267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.238375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.238390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.248519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.248797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.248814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.258991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.259310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.259326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.269728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.270016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.270033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.280372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.280596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.280612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.291357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.291611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.291628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.302452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.302699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.302715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.313320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.313468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.313483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.324471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.324754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.324770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.335639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.335910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.335927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.346557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.346801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.346818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.357332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.357568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.357583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.367178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.367259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.367274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.377682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.377957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.377974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.387505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.387778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.387795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.398125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.398386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.790 [2024-06-11 08:22:59.398403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:28.790 [2024-06-11 08:22:59.409411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.790 [2024-06-11 08:22:59.409743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.791 [2024-06-11 08:22:59.409759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:28.791 [2024-06-11 08:22:59.420407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.791 [2024-06-11 08:22:59.420704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.791 [2024-06-11 08:22:59.420721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.791 [2024-06-11 08:22:59.430713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:28.791 [2024-06-11 08:22:59.430994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.791 [2024-06-11 08:22:59.431010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.441567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.441809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.441825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.451848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.452112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.452129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.463024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.463311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.463327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.473636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.473957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.473976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.483394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.483456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.483472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.493206] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.493456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.493472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.503607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.503845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.503861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.511728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.511960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.511976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.519614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.519940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.519957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.527420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.527686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.527703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.535832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 
08:22:59.535886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.535901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.544821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.544926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.544942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.552394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.552677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.552693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.558864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.559131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.559148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.566601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.566807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.566822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.573458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.573715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.573730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.581641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.581753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.581769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.587523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with 
pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.587589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.587604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.594502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.594766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.594784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.603559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.603855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.603871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.611141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.611194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.611209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.053 [2024-06-11 08:22:59.619399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.053 [2024-06-11 08:22:59.619594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.053 [2024-06-11 08:22:59.619609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.623910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.623974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.623989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.628700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.628977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.628993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.633421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.633533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.633548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.640447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.640533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.640547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.644943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.645093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.645108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.652042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.652310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.652326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.657079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.657145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.657160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.663594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.663665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.663683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.669141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.669243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.669258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.672959] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.673047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.673062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.682993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.683354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.683371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.054 [2024-06-11 08:22:59.692382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.054 [2024-06-11 08:22:59.692683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.054 [2024-06-11 08:22:59.692699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.316 [2024-06-11 08:22:59.703942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.316 [2024-06-11 08:22:59.704156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.316 [2024-06-11 08:22:59.704171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.316 [2024-06-11 08:22:59.714340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.316 [2024-06-11 08:22:59.714562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.316 [2024-06-11 08:22:59.714578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.316 [2024-06-11 08:22:59.724106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.316 [2024-06-11 08:22:59.724423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.316 [2024-06-11 08:22:59.724444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.316 [2024-06-11 08:22:59.734512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.316 [2024-06-11 08:22:59.734815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.316 [2024-06-11 08:22:59.734832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.316 
[2024-06-11 08:22:59.745259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.316 [2024-06-11 08:22:59.745347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.745362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.755829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.756024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.756041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.766036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.766101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.766117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.776894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.777161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.777177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.787105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.787430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.787453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.797774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.798017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.798034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.808702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.809053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.809070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.819901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.820125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.820141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.830326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.830701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.830717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.840789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.841077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.841093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.851227] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.851457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.851472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.861419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.861719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.861735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.871799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.872016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.872032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.882166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.882411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.882428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.893205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.893503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.893520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.904666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.904863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.904879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.914499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.914812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.914829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.925721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.925983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.926003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.936307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.936524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.936541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.946869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.947095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.947110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.317 [2024-06-11 08:22:59.957581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.317 [2024-06-11 08:22:59.957846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.317 [2024-06-11 08:22:59.957861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:22:59.968810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:22:59.969134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:22:59.969150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:22:59.979688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:22:59.979965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:22:59.979980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:22:59.990313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:22:59.990550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:22:59.990566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.000433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.000766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.000781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.011373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.011617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.011636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.022836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.023116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.023136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.034463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.034678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.034695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.045600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.045913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.045930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.056501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.056783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.056801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.067638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.067878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.067895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.078256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.078558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.078575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.089086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.089403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.089419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.099019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.099310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.099326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.107868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.108137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 
08:23:00.108153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.115791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.115861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.115876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.122101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.122320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.122335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.130616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.130898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.130915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.137320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.137434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.137454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.141052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.141125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.141140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.147038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.147096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.147111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.154542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.154680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:29.580 [2024-06-11 08:23:00.154696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.161204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.161334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.161350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.170882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.171112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.171130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.177532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.177607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.177622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.182101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.182181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.182196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.185519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.580 [2024-06-11 08:23:00.185597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.580 [2024-06-11 08:23:00.185612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.580 [2024-06-11 08:23:00.189002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.581 [2024-06-11 08:23:00.189089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.581 [2024-06-11 08:23:00.189103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.581 [2024-06-11 08:23:00.192359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.581 [2024-06-11 08:23:00.192444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.581 [2024-06-11 08:23:00.192459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.581 [2024-06-11 08:23:00.195802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.581 [2024-06-11 08:23:00.195921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.581 [2024-06-11 08:23:00.195937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.581 [2024-06-11 08:23:00.199534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.581 [2024-06-11 08:23:00.199628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.581 [2024-06-11 08:23:00.199643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.581 [2024-06-11 08:23:00.208268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.581 [2024-06-11 08:23:00.208542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.581 [2024-06-11 08:23:00.208558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.581 [2024-06-11 08:23:00.218189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.581 [2024-06-11 08:23:00.218472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.581 [2024-06-11 08:23:00.218491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.842 [2024-06-11 08:23:00.228615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.842 [2024-06-11 08:23:00.228858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.842 [2024-06-11 08:23:00.228874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.842 [2024-06-11 08:23:00.238933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.842 [2024-06-11 08:23:00.239082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.842 [2024-06-11 08:23:00.239098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.249608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.249923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.249938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.260239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.260586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.260601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.270403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.270462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.270478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.281214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.281466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.281483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.291838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.292119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.292135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.302766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.303024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.303041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.312781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.312857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.312872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.319078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.319309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.319326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.326319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.326619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.326635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.330385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.330445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.330461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.335532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.335621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.335635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.342211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.342288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.342303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.349345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.349531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.349547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.357171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.357297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.357313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.364255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.364549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.364568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.372282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.372594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.372610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.378752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.379027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.379044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.385677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.385776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.385791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.392816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.393069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.393084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.399335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.399405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.399420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.404191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.404244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.404259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.409451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 
08:23:00.409547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.409562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.416378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.416467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.416483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.422640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.422726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.422741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.430021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.430113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.430128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.436540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.436621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.436636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.843 [2024-06-11 08:23:00.441432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.843 [2024-06-11 08:23:00.441506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.843 [2024-06-11 08:23:00.441521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.844 [2024-06-11 08:23:00.448697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.844 [2024-06-11 08:23:00.448908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.844 [2024-06-11 08:23:00.448924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.844 [2024-06-11 08:23:00.454397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with 
pdu=0x2000190fef90 00:30:29.844 [2024-06-11 08:23:00.454480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.844 [2024-06-11 08:23:00.454495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:29.844 [2024-06-11 08:23:00.460883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.844 [2024-06-11 08:23:00.461083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.844 [2024-06-11 08:23:00.461099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:29.844 [2024-06-11 08:23:00.469031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.844 [2024-06-11 08:23:00.469428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.844 [2024-06-11 08:23:00.469450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:29.844 [2024-06-11 08:23:00.474294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.844 [2024-06-11 08:23:00.474414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.844 [2024-06-11 08:23:00.474429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:29.844 [2024-06-11 08:23:00.481260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:29.844 [2024-06-11 08:23:00.481428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.844 [2024-06-11 08:23:00.481448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.488572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.488644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.488660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.494891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.494953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.494969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.498281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.498391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.498407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.501687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.501758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.501773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.505840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.506132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.506148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.509522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.509616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.509631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.513493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.513789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.513805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.522360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.522587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.522606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.533532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.533833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.533849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.544348] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.544578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.544595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.555478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.555863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.555879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.566260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.566332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.566347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.578202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.578505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.578522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.588361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.588461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.588476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.597989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.598233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.598250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.605459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.605722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.605738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.613782] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.613849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.613866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.623106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.623164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.623179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.633628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.633980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.633996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.644467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.644520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.644535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.656098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.656319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.656335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.668164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.668445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.668461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.106 [2024-06-11 08:23:00.678699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.106 [2024-06-11 08:23:00.679002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.106 [2024-06-11 08:23:00.679018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.106 
[2024-06-11 08:23:00.688628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.107 [2024-06-11 08:23:00.688890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.107 [2024-06-11 08:23:00.688908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.107 [2024-06-11 08:23:00.699409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234e040) with pdu=0x2000190fef90 00:30:30.107 [2024-06-11 08:23:00.699695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.107 [2024-06-11 08:23:00.699712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.107 00:30:30.107 Latency(us) 00:30:30.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.107 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:30.107 nvme0n1 : 2.01 3776.60 472.07 0.00 0.00 4228.62 1474.56 15837.87 00:30:30.107 =================================================================================================================== 00:30:30.107 Total : 3776.60 472.07 0.00 0.00 4228.62 1474.56 15837.87 00:30:30.107 0 00:30:30.107 08:23:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:30.107 08:23:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:30.107 08:23:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:30.107 | .driver_specific 00:30:30.107 | .nvme_error 00:30:30.107 | .status_code 00:30:30.107 | .command_transient_transport_error' 00:30:30.107 08:23:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:30.368 08:23:00 -- host/digest.sh@71 -- # (( 244 > 0 )) 00:30:30.368 08:23:00 -- host/digest.sh@73 -- # killprocess 1246716 00:30:30.368 08:23:00 -- common/autotest_common.sh@926 -- # '[' -z 1246716 ']' 00:30:30.368 08:23:00 -- common/autotest_common.sh@930 -- # kill -0 1246716 00:30:30.368 08:23:00 -- common/autotest_common.sh@931 -- # uname 00:30:30.368 08:23:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:30.368 08:23:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1246716 00:30:30.368 08:23:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:30.368 08:23:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:30.368 08:23:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1246716' 00:30:30.368 killing process with pid 1246716 00:30:30.368 08:23:00 -- common/autotest_common.sh@945 -- # kill 1246716 00:30:30.368 Received shutdown signal, test time was about 2.000000 seconds 00:30:30.368 00:30:30.368 Latency(us) 00:30:30.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.368 =================================================================================================================== 00:30:30.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.368 08:23:00 -- common/autotest_common.sh@950 -- # wait 1246716 00:30:30.629 08:23:01 -- host/digest.sh@115 -- # killprocess 1244287 00:30:30.629 08:23:01 -- common/autotest_common.sh@926 -- # '[' -z 
1244287 ']' 00:30:30.629 08:23:01 -- common/autotest_common.sh@930 -- # kill -0 1244287 00:30:30.629 08:23:01 -- common/autotest_common.sh@931 -- # uname 00:30:30.629 08:23:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:30.629 08:23:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1244287 00:30:30.629 08:23:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:30.629 08:23:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:30.629 08:23:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1244287' 00:30:30.629 killing process with pid 1244287 00:30:30.629 08:23:01 -- common/autotest_common.sh@945 -- # kill 1244287 00:30:30.629 08:23:01 -- common/autotest_common.sh@950 -- # wait 1244287 00:30:30.629 00:30:30.629 real 0m16.251s 00:30:30.629 user 0m31.718s 00:30:30.629 sys 0m3.490s 00:30:30.629 08:23:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.629 08:23:01 -- common/autotest_common.sh@10 -- # set +x 00:30:30.629 ************************************ 00:30:30.629 END TEST nvmf_digest_error 00:30:30.629 ************************************ 00:30:30.890 08:23:01 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:30:30.890 08:23:01 -- host/digest.sh@139 -- # nvmftestfini 00:30:30.890 08:23:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:30.890 08:23:01 -- nvmf/common.sh@116 -- # sync 00:30:30.890 08:23:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:30.890 08:23:01 -- nvmf/common.sh@119 -- # set +e 00:30:30.890 08:23:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:30.890 08:23:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:30.890 rmmod nvme_tcp 00:30:30.890 rmmod nvme_fabrics 00:30:30.890 rmmod nvme_keyring 00:30:30.890 08:23:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:30.890 08:23:01 -- nvmf/common.sh@123 -- # set -e 00:30:30.890 08:23:01 -- nvmf/common.sh@124 -- # return 0 00:30:30.890 08:23:01 -- nvmf/common.sh@477 -- # '[' -n 1244287 ']' 00:30:30.890 08:23:01 -- nvmf/common.sh@478 -- # killprocess 1244287 00:30:30.890 08:23:01 -- common/autotest_common.sh@926 -- # '[' -z 1244287 ']' 00:30:30.890 08:23:01 -- common/autotest_common.sh@930 -- # kill -0 1244287 00:30:30.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1244287) - No such process 00:30:30.890 08:23:01 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1244287 is not found' 00:30:30.890 Process with pid 1244287 is not found 00:30:30.890 08:23:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:30.890 08:23:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:30.890 08:23:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:30.890 08:23:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:30.890 08:23:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:30.890 08:23:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.890 08:23:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:30.890 08:23:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.800 08:23:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:32.800 00:30:32.800 real 0m41.975s 00:30:32.800 user 1m5.803s 00:30:32.800 sys 0m12.131s 00:30:32.800 08:23:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.800 08:23:03 -- common/autotest_common.sh@10 -- # set +x 00:30:32.800 ************************************ 00:30:32.800 END TEST 
nvmf_digest 00:30:32.800 ************************************ 00:30:33.062 08:23:03 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:30:33.062 08:23:03 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:30:33.062 08:23:03 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:30:33.062 08:23:03 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:33.062 08:23:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:33.062 08:23:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:33.062 08:23:03 -- common/autotest_common.sh@10 -- # set +x 00:30:33.062 ************************************ 00:30:33.062 START TEST nvmf_bdevperf 00:30:33.062 ************************************ 00:30:33.062 08:23:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:33.062 * Looking for test storage... 00:30:33.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.062 08:23:03 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.062 08:23:03 -- nvmf/common.sh@7 -- # uname -s 00:30:33.062 08:23:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.062 08:23:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.062 08:23:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.062 08:23:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.062 08:23:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.062 08:23:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.062 08:23:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.062 08:23:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.062 08:23:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.062 08:23:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.062 08:23:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:33.062 08:23:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:33.062 08:23:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.062 08:23:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.062 08:23:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.062 08:23:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.062 08:23:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.062 08:23:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.062 08:23:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.062 08:23:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.062 08:23:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.062 08:23:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.062 08:23:03 -- paths/export.sh@5 -- # export PATH 00:30:33.062 08:23:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.062 08:23:03 -- nvmf/common.sh@46 -- # : 0 00:30:33.062 08:23:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:33.062 08:23:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:33.062 08:23:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:33.062 08:23:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.062 08:23:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.062 08:23:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:33.062 08:23:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:33.062 08:23:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:33.062 08:23:03 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:33.062 08:23:03 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:33.062 08:23:03 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:33.062 08:23:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:33.062 08:23:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.062 08:23:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:33.062 08:23:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:33.062 08:23:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:33.062 08:23:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.062 08:23:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.062 08:23:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.062 08:23:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:33.062 08:23:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:33.062 08:23:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:33.062 08:23:03 -- common/autotest_common.sh@10 -- # set +x 00:30:41.208 08:23:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
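The gather_supported_nvmf_pci_devs step that follows walks the PCI bus for the NIC families this test knows about: Intel (vendor 0x8086) E810 parts 0x1592/0x159b, X722 0x37d2, and a list of Mellanox (0x15b3) device IDs. As a rough stand-alone equivalent (not the actual common.sh code), the E810 ports this host reports could be listed with lspci alone; the device IDs below are the ones the script registers:

# hedged sketch: enumerate E810 ports the way the log reports them
for dev in 1592 159b; do
    lspci -Dnm -d 8086:"$dev" | while read -r addr _; do
        echo "Found $addr (0x8086 - 0x$dev)"
    done
done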
00:30:41.208 08:23:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:41.208 08:23:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:41.208 08:23:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:41.208 08:23:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:41.208 08:23:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:41.208 08:23:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:41.208 08:23:10 -- nvmf/common.sh@294 -- # net_devs=() 00:30:41.208 08:23:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:41.208 08:23:10 -- nvmf/common.sh@295 -- # e810=() 00:30:41.208 08:23:10 -- nvmf/common.sh@295 -- # local -ga e810 00:30:41.208 08:23:10 -- nvmf/common.sh@296 -- # x722=() 00:30:41.209 08:23:10 -- nvmf/common.sh@296 -- # local -ga x722 00:30:41.209 08:23:10 -- nvmf/common.sh@297 -- # mlx=() 00:30:41.209 08:23:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:41.209 08:23:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.209 08:23:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:41.209 08:23:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:41.209 08:23:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:41.209 08:23:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:41.209 08:23:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:41.209 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:41.209 08:23:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:41.209 08:23:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:41.209 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:41.209 08:23:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
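Both E810 ports come back as ice-bound net devices (cvl_0_0, cvl_0_1), so nvmf_tcp_init next splits them into a target/initiator pair: cvl_0_0 is moved into a private network namespace for the SPDK target at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and both directions are ping-tested. Condensed from the commands that follow in the log (interface names are specific to this host):

ip netns add cvl_0_0_ns_spdk                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator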
00:30:41.209 08:23:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:41.209 08:23:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.209 08:23:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:41.209 08:23:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.209 08:23:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:41.209 Found net devices under 0000:31:00.0: cvl_0_0 00:30:41.209 08:23:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.209 08:23:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:41.209 08:23:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.209 08:23:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:41.209 08:23:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.209 08:23:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:41.209 Found net devices under 0000:31:00.1: cvl_0_1 00:30:41.209 08:23:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.209 08:23:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:41.209 08:23:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:41.209 08:23:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:41.209 08:23:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.209 08:23:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.209 08:23:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.209 08:23:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:41.209 08:23:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.209 08:23:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.209 08:23:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:41.209 08:23:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.209 08:23:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.209 08:23:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:41.209 08:23:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:41.209 08:23:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.209 08:23:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.209 08:23:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.209 08:23:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.209 08:23:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:41.209 08:23:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.209 08:23:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.209 08:23:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.209 08:23:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:41.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:41.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:30:41.209 00:30:41.209 --- 10.0.0.2 ping statistics --- 00:30:41.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.209 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:30:41.209 08:23:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:30:41.209 00:30:41.209 --- 10.0.0.1 ping statistics --- 00:30:41.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.209 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:30:41.209 08:23:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.209 08:23:10 -- nvmf/common.sh@410 -- # return 0 00:30:41.209 08:23:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:41.209 08:23:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.209 08:23:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:41.209 08:23:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.209 08:23:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:41.209 08:23:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:41.209 08:23:10 -- host/bdevperf.sh@25 -- # tgt_init 00:30:41.209 08:23:10 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:41.209 08:23:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:41.209 08:23:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:41.209 08:23:10 -- common/autotest_common.sh@10 -- # set +x 00:30:41.209 08:23:10 -- nvmf/common.sh@469 -- # nvmfpid=1251814 00:30:41.209 08:23:10 -- nvmf/common.sh@470 -- # waitforlisten 1251814 00:30:41.209 08:23:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:41.209 08:23:10 -- common/autotest_common.sh@819 -- # '[' -z 1251814 ']' 00:30:41.209 08:23:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.209 08:23:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:41.209 08:23:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.209 08:23:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:41.209 08:23:10 -- common/autotest_common.sh@10 -- # set +x 00:30:41.209 [2024-06-11 08:23:10.805720] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:41.209 [2024-06-11 08:23:10.805796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.209 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.209 [2024-06-11 08:23:10.890941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:41.209 [2024-06-11 08:23:10.953865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:41.209 [2024-06-11 08:23:10.953995] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:41.209 [2024-06-11 08:23:10.954004] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.209 [2024-06-11 08:23:10.954014] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.209 [2024-06-11 08:23:10.954116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.209 [2024-06-11 08:23:10.954269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.209 [2024-06-11 08:23:10.954270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.209 08:23:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:41.209 08:23:11 -- common/autotest_common.sh@852 -- # return 0 00:30:41.209 08:23:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:41.209 08:23:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:41.209 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:30:41.209 08:23:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.209 08:23:11 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.209 08:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.209 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:30:41.209 [2024-06-11 08:23:11.649768] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.209 08:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.209 08:23:11 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:41.209 08:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.209 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:30:41.209 Malloc0 00:30:41.209 08:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.209 08:23:11 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:41.210 08:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.210 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:30:41.210 08:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.210 08:23:11 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:41.210 08:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.210 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:30:41.210 08:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.210 08:23:11 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.210 08:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:41.210 08:23:11 -- common/autotest_common.sh@10 -- # set +x 00:30:41.210 [2024-06-11 08:23:11.724426] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.210 08:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:41.210 08:23:11 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:41.210 08:23:11 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:41.210 08:23:11 -- nvmf/common.sh@520 -- # config=() 00:30:41.210 08:23:11 -- nvmf/common.sh@520 -- # local subsystem config 00:30:41.210 08:23:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:41.210 08:23:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:41.210 { 
00:30:41.210 "params": { 00:30:41.210 "name": "Nvme$subsystem", 00:30:41.210 "trtype": "$TEST_TRANSPORT", 00:30:41.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:41.210 "adrfam": "ipv4", 00:30:41.210 "trsvcid": "$NVMF_PORT", 00:30:41.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:41.210 "hdgst": ${hdgst:-false}, 00:30:41.210 "ddgst": ${ddgst:-false} 00:30:41.210 }, 00:30:41.210 "method": "bdev_nvme_attach_controller" 00:30:41.210 } 00:30:41.210 EOF 00:30:41.210 )") 00:30:41.210 08:23:11 -- nvmf/common.sh@542 -- # cat 00:30:41.210 08:23:11 -- nvmf/common.sh@544 -- # jq . 00:30:41.210 08:23:11 -- nvmf/common.sh@545 -- # IFS=, 00:30:41.210 08:23:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:41.210 "params": { 00:30:41.210 "name": "Nvme1", 00:30:41.210 "trtype": "tcp", 00:30:41.210 "traddr": "10.0.0.2", 00:30:41.210 "adrfam": "ipv4", 00:30:41.210 "trsvcid": "4420", 00:30:41.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:41.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:41.210 "hdgst": false, 00:30:41.210 "ddgst": false 00:30:41.210 }, 00:30:41.210 "method": "bdev_nvme_attach_controller" 00:30:41.210 }' 00:30:41.210 [2024-06-11 08:23:11.775390] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:41.210 [2024-06-11 08:23:11.775446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251877 ] 00:30:41.210 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.210 [2024-06-11 08:23:11.834724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.471 [2024-06-11 08:23:11.897606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.731 Running I/O for 1 seconds... 
00:30:42.674 00:30:42.674 Latency(us) 00:30:42.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.674 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:42.674 Verification LBA range: start 0x0 length 0x4000 00:30:42.674 Nvme1n1 : 1.01 13952.42 54.50 0.00 0.00 9131.57 976.21 14854.83 00:30:42.674 =================================================================================================================== 00:30:42.674 Total : 13952.42 54.50 0.00 0.00 9131.57 976.21 14854.83 00:30:42.674 08:23:13 -- host/bdevperf.sh@30 -- # bdevperfpid=1252186 00:30:42.674 08:23:13 -- host/bdevperf.sh@32 -- # sleep 3 00:30:42.674 08:23:13 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:42.674 08:23:13 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:42.674 08:23:13 -- nvmf/common.sh@520 -- # config=() 00:30:42.674 08:23:13 -- nvmf/common.sh@520 -- # local subsystem config 00:30:42.674 08:23:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:42.674 08:23:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:42.674 { 00:30:42.674 "params": { 00:30:42.674 "name": "Nvme$subsystem", 00:30:42.674 "trtype": "$TEST_TRANSPORT", 00:30:42.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.674 "adrfam": "ipv4", 00:30:42.674 "trsvcid": "$NVMF_PORT", 00:30:42.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.674 "hdgst": ${hdgst:-false}, 00:30:42.674 "ddgst": ${ddgst:-false} 00:30:42.674 }, 00:30:42.674 "method": "bdev_nvme_attach_controller" 00:30:42.674 } 00:30:42.674 EOF 00:30:42.674 )") 00:30:42.674 08:23:13 -- nvmf/common.sh@542 -- # cat 00:30:42.674 08:23:13 -- nvmf/common.sh@544 -- # jq . 00:30:42.674 08:23:13 -- nvmf/common.sh@545 -- # IFS=, 00:30:42.674 08:23:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:42.674 "params": { 00:30:42.674 "name": "Nvme1", 00:30:42.674 "trtype": "tcp", 00:30:42.674 "traddr": "10.0.0.2", 00:30:42.674 "adrfam": "ipv4", 00:30:42.674 "trsvcid": "4420", 00:30:42.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.674 "hdgst": false, 00:30:42.674 "ddgst": false 00:30:42.674 }, 00:30:42.674 "method": "bdev_nvme_attach_controller" 00:30:42.674 }' 00:30:42.936 [2024-06-11 08:23:13.342178] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:42.936 [2024-06-11 08:23:13.342234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252186 ] 00:30:42.936 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.936 [2024-06-11 08:23:13.402252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.936 [2024-06-11 08:23:13.464303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.197 Running I/O for 15 seconds... 
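With the 15-second run in flight, the next step hard-kills the nvmf_tgt process (pid 1251814) while bdevperf keeps submitting I/O, which is why the rest of the log is a burst of "ABORTED - SQ DELETION" completions. Outside the harness, roughly the same experiment could be reproduced as sketched below; the outer JSON wrapper is the standard SPDK config layout and is an assumption here (the log only prints the inner bdev_nvme_attach_controller entry), while the addresses, NQNs, and bdevperf options are copied from this run:

cat > /tmp/nvme1.json <<'EOF'   # hypothetical path; the script feeds the JSON via /dev/fd instead
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
sleep 3
kill -9 "$nvmfpid"   # nvmfpid: the nvmf_tgt started earlier (1251814 in this run);
                     # every queued I/O then completes as ABORTED - SQ DELETION, as seen below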
00:30:45.746 08:23:16 -- host/bdevperf.sh@33 -- # kill -9 1251814 00:30:45.746 08:23:16 -- host/bdevperf.sh@35 -- # sleep 3 00:30:45.747 [2024-06-11 08:23:16.318275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318729] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.318984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.318993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.319002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.319017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.319026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.319035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.319044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.319054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.319062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.319072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.319081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.319092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.747 [2024-06-11 08:23:16.319100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.747 [2024-06-11 08:23:16.319109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.747 [2024-06-11 08:23:16.319117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:45.748 [2024-06-11 08:23:16.319358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 
08:23:16.319634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.748 [2024-06-11 08:23:16.319867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.748 [2024-06-11 08:23:16.319917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.748 [2024-06-11 08:23:16.319926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.319934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.319945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.319952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.319962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.319968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.319977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.319985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.319995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.749 [2024-06-11 08:23:16.320477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 
[2024-06-11 08:23:16.320494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.749 [2024-06-11 08:23:16.320586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.749 [2024-06-11 08:23:16.320593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:45.750 [2024-06-11 08:23:16.320646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:45.750 [2024-06-11 08:23:16.320764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9810 is same with the state(5) to be set 00:30:45.750 [2024-06-11 08:23:16.320782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:45.750 [2024-06-11 08:23:16.320787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:45.750 [2024-06-11 08:23:16.320796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104944 len:8 PRP1 0x0 PRP2 0x0 00:30:45.750 [2024-06-11 08:23:16.320804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:45.750 [2024-06-11 08:23:16.320840] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d9810 was disconnected and freed. reset controller. 
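The block above is the initiator failing back its queued I/O with ABORTED - SQ DELETION while qpair 0x10d9810 is torn down; bdev_nvme then frees the qpair and schedules a controller reset. As a rough sketch, the amount and mix of I/O that was in flight can be tallied from a saved copy of this console output (console.log is a placeholder path here, not a file the test writes):

# count the completions failed back with SQ DELETION
grep -o 'ABORTED - SQ DELETION' console.log | wc -l
# break the aborted commands down by opcode to see the read/write mix that was in flight
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' console.log | awk '{print $2}' | sort | uniq -c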
00:30:45.750 [2024-06-11 08:23:16.323040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.750 [2024-06-11 08:23:16.323085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:45.750 [2024-06-11 08:23:16.323673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.324009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.324022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:45.750 [2024-06-11 08:23:16.324032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:45.750 [2024-06-11 08:23:16.324216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:45.750 [2024-06-11 08:23:16.324365] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.750 [2024-06-11 08:23:16.324374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.750 [2024-06-11 08:23:16.324382] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.750 [2024-06-11 08:23:16.326556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.750 [2024-06-11 08:23:16.335823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.750 [2024-06-11 08:23:16.336249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.336678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.336716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:45.750 [2024-06-11 08:23:16.336727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:45.750 [2024-06-11 08:23:16.336908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:45.750 [2024-06-11 08:23:16.337037] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.750 [2024-06-11 08:23:16.337047] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.750 [2024-06-11 08:23:16.337055] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.750 [2024-06-11 08:23:16.339280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
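From this point on the reconnect path repeats the same failure: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED), the subsequent qpair flush reports Bad file descriptor (errno 9), controller re-initialization fails, and bdev_nvme abandons that reset attempt. A minimal sketch for checking, outside the test itself, whether anything is still listening on that address (assuming ss and a netcat that supports -z are available on the hosts):

# on the target host: is the NVMe/TCP listener still bound to port 4420?
ss -ltn | grep ':4420' || echo "no listener on 4420"
# on the initiator host: probe the listener a few times, mirroring the reconnect attempts in the log
for i in 1 2 3 4 5; do
  nc -z -w 1 10.0.0.2 4420 && echo "attempt $i: listener reachable" || echo "attempt $i: connection refused"
  sleep 1
done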
00:30:45.750 [2024-06-11 08:23:16.348361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.750 [2024-06-11 08:23:16.348917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.349287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.349301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:45.750 [2024-06-11 08:23:16.349310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:45.750 [2024-06-11 08:23:16.349398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:45.750 [2024-06-11 08:23:16.349555] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.750 [2024-06-11 08:23:16.349569] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.750 [2024-06-11 08:23:16.349577] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.750 [2024-06-11 08:23:16.351907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.750 [2024-06-11 08:23:16.360831] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.750 [2024-06-11 08:23:16.361405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.361765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.361781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:45.750 [2024-06-11 08:23:16.361791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:45.750 [2024-06-11 08:23:16.361953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:45.750 [2024-06-11 08:23:16.362082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.750 [2024-06-11 08:23:16.362091] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.750 [2024-06-11 08:23:16.362099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.750 [2024-06-11 08:23:16.364493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:45.750 [2024-06-11 08:23:16.373461] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.750 [2024-06-11 08:23:16.373974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.374304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.374315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:45.750 [2024-06-11 08:23:16.374323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:45.750 [2024-06-11 08:23:16.374454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:45.750 [2024-06-11 08:23:16.374598] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.750 [2024-06-11 08:23:16.374607] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.750 [2024-06-11 08:23:16.374614] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.750 [2024-06-11 08:23:16.376755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:45.750 [2024-06-11 08:23:16.385937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:45.750 [2024-06-11 08:23:16.386404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.386719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:45.750 [2024-06-11 08:23:16.386730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:45.750 [2024-06-11 08:23:16.386738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:45.750 [2024-06-11 08:23:16.386899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:45.750 [2024-06-11 08:23:16.387078] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:45.750 [2024-06-11 08:23:16.387087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:45.750 [2024-06-11 08:23:16.387099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:45.750 [2024-06-11 08:23:16.389448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.016 [2024-06-11 08:23:16.398486] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.016 [2024-06-11 08:23:16.398979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.399279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.399290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.016 [2024-06-11 08:23:16.399297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.016 [2024-06-11 08:23:16.399489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.016 [2024-06-11 08:23:16.399653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.016 [2024-06-11 08:23:16.399662] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.016 [2024-06-11 08:23:16.399670] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.016 [2024-06-11 08:23:16.401918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.016 [2024-06-11 08:23:16.410947] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.016 [2024-06-11 08:23:16.411539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.411926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.411940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.016 [2024-06-11 08:23:16.411949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.016 [2024-06-11 08:23:16.412129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.016 [2024-06-11 08:23:16.412312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.016 [2024-06-11 08:23:16.412321] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.016 [2024-06-11 08:23:16.412328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.016 [2024-06-11 08:23:16.414592] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.016 [2024-06-11 08:23:16.423292] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.016 [2024-06-11 08:23:16.423800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.424174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.424188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.016 [2024-06-11 08:23:16.424197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.016 [2024-06-11 08:23:16.424359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.016 [2024-06-11 08:23:16.424515] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.016 [2024-06-11 08:23:16.424525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.016 [2024-06-11 08:23:16.424532] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.016 [2024-06-11 08:23:16.426760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.016 [2024-06-11 08:23:16.435768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.016 [2024-06-11 08:23:16.436417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.436814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.436829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.016 [2024-06-11 08:23:16.436838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.016 [2024-06-11 08:23:16.437019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.016 [2024-06-11 08:23:16.437204] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.016 [2024-06-11 08:23:16.437213] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.016 [2024-06-11 08:23:16.437221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.016 [2024-06-11 08:23:16.439610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.016 [2024-06-11 08:23:16.448276] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.016 [2024-06-11 08:23:16.448858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.449242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.016 [2024-06-11 08:23:16.449257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.016 [2024-06-11 08:23:16.449266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.449428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.449529] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.449539] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.449546] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.451913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.017 [2024-06-11 08:23:16.461068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.461643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.461980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.461994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.462003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.462165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.462350] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.462360] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.462367] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.464743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.017 [2024-06-11 08:23:16.473604] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.474157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.474571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.474586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.474596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.474740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.474850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.474859] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.474866] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.477123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.017 [2024-06-11 08:23:16.486117] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.486682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.487058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.487071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.487081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.487280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.487491] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.487503] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.487510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.489728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.017 [2024-06-11 08:23:16.498737] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.499336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.499675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.499690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.499700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.499880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.500046] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.500055] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.500063] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.502245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.017 [2024-06-11 08:23:16.511190] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.511809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.512190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.512204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.512213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.512357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.512534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.512544] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.512552] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.514915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.017 [2024-06-11 08:23:16.523719] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.524241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.524618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.524634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.524644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.524806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.524916] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.524925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.524933] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.527210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.017 [2024-06-11 08:23:16.535937] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.536524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.536866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.536880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.536890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.537071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.537237] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.537246] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.537254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.539515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.017 [2024-06-11 08:23:16.548462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.549071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.549408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.017 [2024-06-11 08:23:16.549426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.017 [2024-06-11 08:23:16.549436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.017 [2024-06-11 08:23:16.549607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.017 [2024-06-11 08:23:16.549736] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.017 [2024-06-11 08:23:16.549745] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.017 [2024-06-11 08:23:16.549752] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.017 [2024-06-11 08:23:16.552099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.017 [2024-06-11 08:23:16.561023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.017 [2024-06-11 08:23:16.561584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.561929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.561944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.561954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.018 [2024-06-11 08:23:16.562154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.018 [2024-06-11 08:23:16.562264] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.018 [2024-06-11 08:23:16.562274] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.018 [2024-06-11 08:23:16.562281] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.018 [2024-06-11 08:23:16.564307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.018 [2024-06-11 08:23:16.573757] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.018 [2024-06-11 08:23:16.574354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.574696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.574712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.574722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.018 [2024-06-11 08:23:16.574865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.018 [2024-06-11 08:23:16.574993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.018 [2024-06-11 08:23:16.575002] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.018 [2024-06-11 08:23:16.575010] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.018 [2024-06-11 08:23:16.577248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.018 [2024-06-11 08:23:16.586175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.018 [2024-06-11 08:23:16.586620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.586799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.586812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.586825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.018 [2024-06-11 08:23:16.587006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.018 [2024-06-11 08:23:16.587152] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.018 [2024-06-11 08:23:16.587162] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.018 [2024-06-11 08:23:16.587169] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.018 [2024-06-11 08:23:16.589114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.018 [2024-06-11 08:23:16.598699] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.018 [2024-06-11 08:23:16.599210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.599462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.599474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.599484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.018 [2024-06-11 08:23:16.599647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.018 [2024-06-11 08:23:16.599828] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.018 [2024-06-11 08:23:16.599836] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.018 [2024-06-11 08:23:16.599844] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.018 [2024-06-11 08:23:16.602094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.018 [2024-06-11 08:23:16.611044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.018 [2024-06-11 08:23:16.611533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.611880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.611891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.611899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.018 [2024-06-11 08:23:16.612062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.018 [2024-06-11 08:23:16.612206] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.018 [2024-06-11 08:23:16.612215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.018 [2024-06-11 08:23:16.612223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.018 [2024-06-11 08:23:16.614286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.018 [2024-06-11 08:23:16.623618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.018 [2024-06-11 08:23:16.624107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.624426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.624436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.624450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.018 [2024-06-11 08:23:16.624616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.018 [2024-06-11 08:23:16.624722] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.018 [2024-06-11 08:23:16.624731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.018 [2024-06-11 08:23:16.624739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.018 [2024-06-11 08:23:16.627006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.018 [2024-06-11 08:23:16.636211] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.018 [2024-06-11 08:23:16.636828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.637170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.637184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.637193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.018 [2024-06-11 08:23:16.637318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.018 [2024-06-11 08:23:16.637491] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.018 [2024-06-11 08:23:16.637501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.018 [2024-06-11 08:23:16.637509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.018 [2024-06-11 08:23:16.639816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.018 [2024-06-11 08:23:16.648636] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.018 [2024-06-11 08:23:16.649187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.649453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.018 [2024-06-11 08:23:16.649468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.018 [2024-06-11 08:23:16.649478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.019 [2024-06-11 08:23:16.649641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.019 [2024-06-11 08:23:16.649824] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.019 [2024-06-11 08:23:16.649834] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.019 [2024-06-11 08:23:16.649841] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.019 [2024-06-11 08:23:16.652212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.282 [2024-06-11 08:23:16.661008] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.282 [2024-06-11 08:23:16.661444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.282 [2024-06-11 08:23:16.661782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.661793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.661801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.661982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.662149] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.662158] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.662165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.664416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.283 [2024-06-11 08:23:16.673578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.674078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.674290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.674300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.674308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.674431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.674580] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.674590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.674597] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.676917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.283 [2024-06-11 08:23:16.685894] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.686539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.686767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.686782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.686791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.686953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.687062] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.687072] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.687079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.689214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.283 [2024-06-11 08:23:16.698257] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.698899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.699159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.699174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.699183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.699346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.699538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.699552] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.699560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.701703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.283 [2024-06-11 08:23:16.710910] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.711487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.711731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.711747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.711757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.711957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.712104] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.712112] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.712120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.714379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.283 [2024-06-11 08:23:16.723204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.723653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.723980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.723991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.723999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.724143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.724325] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.724334] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.724341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.726502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.283 [2024-06-11 08:23:16.735633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.736104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.736448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.736460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.736467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.736647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.736773] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.736782] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.736797] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.739211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.283 [2024-06-11 08:23:16.748155] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.748808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.749026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.749041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.749051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.749250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.749461] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.749471] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.749479] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.751807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.283 [2024-06-11 08:23:16.760842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.761429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.761703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.761717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.761727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.761926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.762073] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.762082] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.762089] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.764441] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.283 [2024-06-11 08:23:16.773452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.773989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.774371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.774385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.774395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.774585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.774714] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.774724] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.774731] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.777154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.283 [2024-06-11 08:23:16.785941] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.283 [2024-06-11 08:23:16.786538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.786868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.283 [2024-06-11 08:23:16.786882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.283 [2024-06-11 08:23:16.786891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.283 [2024-06-11 08:23:16.787091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.283 [2024-06-11 08:23:16.787219] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.283 [2024-06-11 08:23:16.787229] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.283 [2024-06-11 08:23:16.787236] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.283 [2024-06-11 08:23:16.789480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.283 [2024-06-11 08:23:16.798659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.799254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.799603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.799619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.799628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.799790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.799993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.800003] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.800011] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.802413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.284 [2024-06-11 08:23:16.811226] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.811757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.812133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.812147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.812156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.812318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.812490] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.812499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.812508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.814836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.284 [2024-06-11 08:23:16.823714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.824321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.824645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.824662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.824672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.824797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.824963] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.824972] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.824980] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.827347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.284 [2024-06-11 08:23:16.836394] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.836989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.837366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.837379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.837388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.837520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.837630] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.837639] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.837647] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.839973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.284 [2024-06-11 08:23:16.848872] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.849411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.849747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.849758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.849766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.849947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.850072] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.850081] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.850089] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.852356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.284 [2024-06-11 08:23:16.861186] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.861794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.862168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.862182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.862191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.862353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.862507] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.862517] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.862525] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.864837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.284 [2024-06-11 08:23:16.873618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.874175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.874556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.874571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.874580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.874743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.874908] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.874918] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.874926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.877314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.284 [2024-06-11 08:23:16.886183] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.886690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.886997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.887008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.887016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.887141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.887322] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.887330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.887337] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.889623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.284 [2024-06-11 08:23:16.898807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.899106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.899408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.899424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.899432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.899637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.899835] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.899844] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.899852] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.902211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.284 [2024-06-11 08:23:16.911330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.911868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.913080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.913102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.913112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.913258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.913449] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.913458] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.913467] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.284 [2024-06-11 08:23:16.915759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.284 [2024-06-11 08:23:16.923631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.284 [2024-06-11 08:23:16.924145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.924487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.284 [2024-06-11 08:23:16.924501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.284 [2024-06-11 08:23:16.924510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.284 [2024-06-11 08:23:16.924637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.284 [2024-06-11 08:23:16.924763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.284 [2024-06-11 08:23:16.924771] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.284 [2024-06-11 08:23:16.924778] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:16.927087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.548 [2024-06-11 08:23:16.936242] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:16.936710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.937035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.937046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:16.937058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:16.937165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:16.937310] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:16.937322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:16.937331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:16.939693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.548 [2024-06-11 08:23:16.948768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:16.949375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.949607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.949621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:16.949631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:16.949793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:16.949923] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:16.949932] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:16.949939] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:16.951958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.548 [2024-06-11 08:23:16.961222] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:16.961575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.961862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.961873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:16.961882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:16.961990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:16.962151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:16.962159] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:16.962167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:16.964295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.548 [2024-06-11 08:23:16.973823] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:16.974245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.974561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.974573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:16.974580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:16.974728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:16.974872] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:16.974881] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:16.974888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:16.977103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.548 [2024-06-11 08:23:16.986353] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:16.986832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.987132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.987142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:16.987150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:16.987348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:16.987459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:16.987469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:16.987476] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:16.989701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.548 [2024-06-11 08:23:16.998748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:16.999220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.999520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:16.999531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:16.999539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:16.999682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:16.999825] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:16.999834] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:16.999841] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:17.002054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.548 [2024-06-11 08:23:17.011246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:17.011696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:17.011994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:17.012005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:17.012012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:17.012173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:17.012265] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:17.012273] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:17.012280] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:17.014477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.548 [2024-06-11 08:23:17.023760] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:17.024242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:17.024577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:17.024589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:17.024596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:17.024721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:17.024885] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:17.024894] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.548 [2024-06-11 08:23:17.024901] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.548 [2024-06-11 08:23:17.027204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.548 [2024-06-11 08:23:17.036362] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.548 [2024-06-11 08:23:17.036830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:17.037037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.548 [2024-06-11 08:23:17.037047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.548 [2024-06-11 08:23:17.037055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.548 [2024-06-11 08:23:17.037179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.548 [2024-06-11 08:23:17.037304] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.548 [2024-06-11 08:23:17.037314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.037320] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.039630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.549 [2024-06-11 08:23:17.048757] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.049250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.049625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.049636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.049644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.049787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.049967] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.049979] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.049987] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.052185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.549 [2024-06-11 08:23:17.061184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.061759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.062090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.062104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.062114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.062276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.062404] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.062414] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.062421] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.064809] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.549 [2024-06-11 08:23:17.073592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.074079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.074260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.074271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.074279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.074423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.074554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.074563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.074570] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.076915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.549 [2024-06-11 08:23:17.086175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.086652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.087467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.087490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.087498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.087647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.087756] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.087765] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.087776] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.090052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.549 [2024-06-11 08:23:17.098632] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.099111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.099448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.099460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.099468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.099647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.099774] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.099783] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.099790] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.102166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.549 [2024-06-11 08:23:17.111155] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.111554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.111874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.111885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.111893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.112017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.112143] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.112152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.112159] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.114577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.549 [2024-06-11 08:23:17.123692] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.124133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.124449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.124461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.124469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.124538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.124681] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.124690] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.124697] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.126931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.549 [2024-06-11 08:23:17.136371] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.136856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.137240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.137254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.137263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.549 [2024-06-11 08:23:17.137407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.549 [2024-06-11 08:23:17.137598] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.549 [2024-06-11 08:23:17.137608] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.549 [2024-06-11 08:23:17.137616] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.549 [2024-06-11 08:23:17.139794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.549 [2024-06-11 08:23:17.148827] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.549 [2024-06-11 08:23:17.149421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.149668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.549 [2024-06-11 08:23:17.149681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.549 [2024-06-11 08:23:17.149691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.550 [2024-06-11 08:23:17.149853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.550 [2024-06-11 08:23:17.149984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.550 [2024-06-11 08:23:17.149993] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.550 [2024-06-11 08:23:17.150001] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.550 [2024-06-11 08:23:17.152132] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.550 [2024-06-11 08:23:17.161289] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.550 [2024-06-11 08:23:17.161767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.550 [2024-06-11 08:23:17.162110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.550 [2024-06-11 08:23:17.162122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.550 [2024-06-11 08:23:17.162130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.550 [2024-06-11 08:23:17.162293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.550 [2024-06-11 08:23:17.162382] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.550 [2024-06-11 08:23:17.162390] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.550 [2024-06-11 08:23:17.162397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.550 [2024-06-11 08:23:17.164763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.550 [2024-06-11 08:23:17.173705] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.550 [2024-06-11 08:23:17.174168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.550 [2024-06-11 08:23:17.174505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.550 [2024-06-11 08:23:17.174517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.550 [2024-06-11 08:23:17.174524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.550 [2024-06-11 08:23:17.174650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.550 [2024-06-11 08:23:17.174777] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.550 [2024-06-11 08:23:17.174786] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.550 [2024-06-11 08:23:17.174792] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.550 [2024-06-11 08:23:17.177099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.550 [2024-06-11 08:23:17.186368] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.550 [2024-06-11 08:23:17.186889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.550 [2024-06-11 08:23:17.187219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.550 [2024-06-11 08:23:17.187233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.550 [2024-06-11 08:23:17.187243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.550 [2024-06-11 08:23:17.187368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.550 [2024-06-11 08:23:17.187503] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.550 [2024-06-11 08:23:17.187512] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.550 [2024-06-11 08:23:17.187520] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.550 [2024-06-11 08:23:17.189720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.813 [2024-06-11 08:23:17.198954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.813 [2024-06-11 08:23:17.199291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.199625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.199637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.813 [2024-06-11 08:23:17.199645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.813 [2024-06-11 08:23:17.199752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.813 [2024-06-11 08:23:17.199859] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.813 [2024-06-11 08:23:17.199867] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.813 [2024-06-11 08:23:17.199874] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.813 [2024-06-11 08:23:17.202031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.813 [2024-06-11 08:23:17.211520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.813 [2024-06-11 08:23:17.211877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.212152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.212163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.813 [2024-06-11 08:23:17.212170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.813 [2024-06-11 08:23:17.212295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.813 [2024-06-11 08:23:17.212420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.813 [2024-06-11 08:23:17.212428] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.813 [2024-06-11 08:23:17.212436] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.813 [2024-06-11 08:23:17.214673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.813 [2024-06-11 08:23:17.224019] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.813 [2024-06-11 08:23:17.224569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.224916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.224930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.813 [2024-06-11 08:23:17.224939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.813 [2024-06-11 08:23:17.225102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.813 [2024-06-11 08:23:17.225269] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.813 [2024-06-11 08:23:17.225278] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.813 [2024-06-11 08:23:17.225287] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.813 [2024-06-11 08:23:17.227660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.813 [2024-06-11 08:23:17.236489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.813 [2024-06-11 08:23:17.236986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.237297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.237309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.813 [2024-06-11 08:23:17.237318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.813 [2024-06-11 08:23:17.237449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.813 [2024-06-11 08:23:17.237557] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.813 [2024-06-11 08:23:17.237566] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.813 [2024-06-11 08:23:17.237573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.813 [2024-06-11 08:23:17.239825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.813 [2024-06-11 08:23:17.248925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.813 [2024-06-11 08:23:17.249525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.249866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.813 [2024-06-11 08:23:17.249881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.813 [2024-06-11 08:23:17.249890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.813 [2024-06-11 08:23:17.250015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.813 [2024-06-11 08:23:17.250180] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.813 [2024-06-11 08:23:17.250189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.813 [2024-06-11 08:23:17.250197] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.252347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.814 [2024-06-11 08:23:17.261283] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.261776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.261973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.261984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.261994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.262101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.262263] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.262273] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.262280] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.264501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.814 [2024-06-11 08:23:17.273674] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.274139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.274446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.274457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.274465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.274608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.274732] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.274741] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.274748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.277072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.814 [2024-06-11 08:23:17.286370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.286846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.287185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.287196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.287207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.287350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.287535] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.287545] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.287553] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.289909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.814 [2024-06-11 08:23:17.298801] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.299232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.299573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.299584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.299592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.299699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.299804] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.299813] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.299820] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.301996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.814 [2024-06-11 08:23:17.311091] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.311565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.311884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.311895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.311903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.312063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.312189] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.312197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.312205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.314417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.814 [2024-06-11 08:23:17.323644] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.324146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.324483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.324494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.324502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.324648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.324773] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.324781] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.324788] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.326837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.814 [2024-06-11 08:23:17.336107] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.336475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.336653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.336663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.336671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.336832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.336976] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.336986] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.336993] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.339259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.814 [2024-06-11 08:23:17.348702] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.349204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.349517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.349529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.349537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.349700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.349826] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.349835] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.349842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.352124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.814 [2024-06-11 08:23:17.361273] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.814 [2024-06-11 08:23:17.361716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.362051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.814 [2024-06-11 08:23:17.362061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.814 [2024-06-11 08:23:17.362068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.814 [2024-06-11 08:23:17.362174] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.814 [2024-06-11 08:23:17.362322] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.814 [2024-06-11 08:23:17.362330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.814 [2024-06-11 08:23:17.362338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.814 [2024-06-11 08:23:17.364515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.814 [2024-06-11 08:23:17.373923] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.815 [2024-06-11 08:23:17.374390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.374686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.374697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.815 [2024-06-11 08:23:17.374705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.815 [2024-06-11 08:23:17.374922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.815 [2024-06-11 08:23:17.375029] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.815 [2024-06-11 08:23:17.375037] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.815 [2024-06-11 08:23:17.375044] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.815 [2024-06-11 08:23:17.377364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.815 [2024-06-11 08:23:17.386451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.815 [2024-06-11 08:23:17.387076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.387333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.387347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.815 [2024-06-11 08:23:17.387356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.815 [2024-06-11 08:23:17.387507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.815 [2024-06-11 08:23:17.387636] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.815 [2024-06-11 08:23:17.387646] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.815 [2024-06-11 08:23:17.387653] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.815 [2024-06-11 08:23:17.390068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.815 [2024-06-11 08:23:17.398967] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.815 [2024-06-11 08:23:17.399383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.399723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.399734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.815 [2024-06-11 08:23:17.399743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.815 [2024-06-11 08:23:17.399849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.815 [2024-06-11 08:23:17.399975] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.815 [2024-06-11 08:23:17.399988] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.815 [2024-06-11 08:23:17.399995] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.815 [2024-06-11 08:23:17.402282] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.815 [2024-06-11 08:23:17.411603] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.815 [2024-06-11 08:23:17.412023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.412357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.412367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.815 [2024-06-11 08:23:17.412375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.815 [2024-06-11 08:23:17.412542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.815 [2024-06-11 08:23:17.412667] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.815 [2024-06-11 08:23:17.412676] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.815 [2024-06-11 08:23:17.412683] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.815 [2024-06-11 08:23:17.415024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.815 [2024-06-11 08:23:17.424095] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.815 [2024-06-11 08:23:17.424711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.425085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.425099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.815 [2024-06-11 08:23:17.425109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.815 [2024-06-11 08:23:17.425290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.815 [2024-06-11 08:23:17.425481] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.815 [2024-06-11 08:23:17.425492] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.815 [2024-06-11 08:23:17.425500] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.815 [2024-06-11 08:23:17.427975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:46.815 [2024-06-11 08:23:17.436716] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.815 [2024-06-11 08:23:17.437637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.437973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.437988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.815 [2024-06-11 08:23:17.437998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.815 [2024-06-11 08:23:17.438161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.815 [2024-06-11 08:23:17.438308] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.815 [2024-06-11 08:23:17.438317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.815 [2024-06-11 08:23:17.438329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.815 [2024-06-11 08:23:17.440683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.815 [2024-06-11 08:23:17.449136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.815 [2024-06-11 08:23:17.449606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.449901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.815 [2024-06-11 08:23:17.449912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:46.815 [2024-06-11 08:23:17.449920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:46.815 [2024-06-11 08:23:17.450064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:46.815 [2024-06-11 08:23:17.450170] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:46.815 [2024-06-11 08:23:17.450178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:46.815 [2024-06-11 08:23:17.450185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.815 [2024-06-11 08:23:17.452398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.077 [2024-06-11 08:23:17.461807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.077 [2024-06-11 08:23:17.462258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.077 [2024-06-11 08:23:17.462573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.077 [2024-06-11 08:23:17.462585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.077 [2024-06-11 08:23:17.462593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.077 [2024-06-11 08:23:17.462717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.077 [2024-06-11 08:23:17.462841] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.462850] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.462857] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.465165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.078 [2024-06-11 08:23:17.474267] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.474732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.475049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.475060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.475068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.475213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.475357] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.475365] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.475372] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.477756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.078 [2024-06-11 08:23:17.486592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.487037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.487871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.487893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.487901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.487996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.488103] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.488113] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.488121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.490283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.078 [2024-06-11 08:23:17.499140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.499609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.499945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.499956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.499963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.500107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.500306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.500316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.500323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.502518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.078 [2024-06-11 08:23:17.511671] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.512149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.512329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.512341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.512349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.512516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.512624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.512633] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.512640] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.514930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.078 [2024-06-11 08:23:17.524217] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.524781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.525164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.525179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.525188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.525387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.525558] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.525568] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.525575] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.527700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.078 [2024-06-11 08:23:17.536579] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.537204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.537557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.537573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.537583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.537708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.537893] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.537902] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.537910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.540022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.078 [2024-06-11 08:23:17.549121] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.549558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.549877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.549888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.549896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.550057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.550165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.550174] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.550181] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.078 [2024-06-11 08:23:17.552565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.078 [2024-06-11 08:23:17.561642] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.078 [2024-06-11 08:23:17.562098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.562399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.078 [2024-06-11 08:23:17.562409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.078 [2024-06-11 08:23:17.562417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.078 [2024-06-11 08:23:17.562584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.078 [2024-06-11 08:23:17.562728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.078 [2024-06-11 08:23:17.562737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.078 [2024-06-11 08:23:17.562744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.564994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.079 [2024-06-11 08:23:17.574179] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.574800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.575139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.575153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.575163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.575343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.575479] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.575488] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.575497] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.577718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.079 [2024-06-11 08:23:17.586643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.587101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.587418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.587429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.587443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.587569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.587730] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.587739] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.587746] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.589940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.079 [2024-06-11 08:23:17.599045] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.599583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.599933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.599948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.599957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.600120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.600249] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.600259] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.600266] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.602695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.079 [2024-06-11 08:23:17.611479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.611971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.612245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.612255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.612263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.612388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.612521] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.612530] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.612537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.614825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.079 [2024-06-11 08:23:17.624028] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.624540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.624915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.624929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.624938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.625118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.625302] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.625312] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.625320] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.627601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.079 [2024-06-11 08:23:17.636508] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.636948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.637321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.637335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.637348] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.637522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.637652] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.637661] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.637669] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.640016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.079 [2024-06-11 08:23:17.649082] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.649655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.649993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.650007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.650017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.650215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.650417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.650427] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.650434] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.652641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.079 [2024-06-11 08:23:17.661630] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.662268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.662682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.662698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.662707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.079 [2024-06-11 08:23:17.662832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.079 [2024-06-11 08:23:17.662998] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.079 [2024-06-11 08:23:17.663007] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.079 [2024-06-11 08:23:17.663015] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.079 [2024-06-11 08:23:17.665256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.079 [2024-06-11 08:23:17.674140] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.079 [2024-06-11 08:23:17.674757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.675124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.079 [2024-06-11 08:23:17.675138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.079 [2024-06-11 08:23:17.675148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.080 [2024-06-11 08:23:17.675314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.080 [2024-06-11 08:23:17.675469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.080 [2024-06-11 08:23:17.675479] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.080 [2024-06-11 08:23:17.675486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.080 [2024-06-11 08:23:17.677894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.080 [2024-06-11 08:23:17.686508] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.080 [2024-06-11 08:23:17.687171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.080 [2024-06-11 08:23:17.687549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.080 [2024-06-11 08:23:17.687564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.080 [2024-06-11 08:23:17.687574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.080 [2024-06-11 08:23:17.687736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.080 [2024-06-11 08:23:17.687919] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.080 [2024-06-11 08:23:17.687928] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.080 [2024-06-11 08:23:17.687935] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.080 [2024-06-11 08:23:17.690230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.080 [2024-06-11 08:23:17.698963] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.080 [2024-06-11 08:23:17.699552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.080 [2024-06-11 08:23:17.699901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.080 [2024-06-11 08:23:17.699915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.080 [2024-06-11 08:23:17.699924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.080 [2024-06-11 08:23:17.700049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.080 [2024-06-11 08:23:17.700178] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.080 [2024-06-11 08:23:17.700187] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.080 [2024-06-11 08:23:17.700194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.080 [2024-06-11 08:23:17.702510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.080 [2024-06-11 08:23:17.711374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.080 [2024-06-11 08:23:17.711827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.080 [2024-06-11 08:23:17.712163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.080 [2024-06-11 08:23:17.712174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.080 [2024-06-11 08:23:17.712182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.080 [2024-06-11 08:23:17.712345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.080 [2024-06-11 08:23:17.712499] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.080 [2024-06-11 08:23:17.712509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.080 [2024-06-11 08:23:17.712519] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.080 [2024-06-11 08:23:17.714921] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.342 [2024-06-11 08:23:17.724049] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.342 [2024-06-11 08:23:17.724563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.342 [2024-06-11 08:23:17.724892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.342 [2024-06-11 08:23:17.724903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.342 [2024-06-11 08:23:17.724911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.342 [2024-06-11 08:23:17.725035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.342 [2024-06-11 08:23:17.725160] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.342 [2024-06-11 08:23:17.725168] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.342 [2024-06-11 08:23:17.725175] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.342 [2024-06-11 08:23:17.727497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.342 [2024-06-11 08:23:17.736494] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.342 [2024-06-11 08:23:17.737043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.342 [2024-06-11 08:23:17.737422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.342 [2024-06-11 08:23:17.737436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.342 [2024-06-11 08:23:17.737455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.342 [2024-06-11 08:23:17.737617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.342 [2024-06-11 08:23:17.737727] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.342 [2024-06-11 08:23:17.737737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.342 [2024-06-11 08:23:17.737745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.342 [2024-06-11 08:23:17.739909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.342 [2024-06-11 08:23:17.749077] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.342 [2024-06-11 08:23:17.749540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.342 [2024-06-11 08:23:17.749918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.342 [2024-06-11 08:23:17.749933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.342 [2024-06-11 08:23:17.749942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.342 [2024-06-11 08:23:17.750087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.750234] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.750248] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.750256] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.752367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.343 [2024-06-11 08:23:17.761506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.762092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.762423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.762444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.762454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.762579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.762745] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.762754] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.762762] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.765031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.343 [2024-06-11 08:23:17.774000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.774545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.774927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.774941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.774950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.775113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.775223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.775232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.775240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.777356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.343 [2024-06-11 08:23:17.786378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.786948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.787327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.787341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.787351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.787578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.787744] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.787753] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.787766] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.790002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.343 [2024-06-11 08:23:17.798987] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.799540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.799868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.799881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.799891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.800053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.800217] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.800226] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.800234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.802404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.343 [2024-06-11 08:23:17.811479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.811958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.812291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.812302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.812310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.812435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.812586] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.812595] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.812603] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.814776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.343 [2024-06-11 08:23:17.824082] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.824707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.825081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.825095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.825104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.825286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.825478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.825488] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.825495] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.827882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.343 [2024-06-11 08:23:17.836538] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.837146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.837506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.837521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.837531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.837674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.837857] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.837866] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.837874] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.840073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.343 [2024-06-11 08:23:17.848918] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.849368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.849567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.849581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.343 [2024-06-11 08:23:17.849589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.343 [2024-06-11 08:23:17.849732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.343 [2024-06-11 08:23:17.849821] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.343 [2024-06-11 08:23:17.849831] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.343 [2024-06-11 08:23:17.849838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.343 [2024-06-11 08:23:17.852072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.343 [2024-06-11 08:23:17.861531] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.343 [2024-06-11 08:23:17.862063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.862445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.343 [2024-06-11 08:23:17.862460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.862469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.862649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.862814] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.862823] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.862831] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.865020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.344 [2024-06-11 08:23:17.874104] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.874733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.875108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.875122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.875131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.875294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.875468] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.875478] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.875486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.877685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.344 [2024-06-11 08:23:17.886547] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.887075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.887405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.887419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.887428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.887618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.887765] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.887774] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.887782] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.890001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.344 [2024-06-11 08:23:17.898680] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.899287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.899655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.899670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.899680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.899805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.899971] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.899980] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.899987] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.902342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.344 [2024-06-11 08:23:17.911153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.911721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.912062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.912076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.912085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.912229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.912339] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.912348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.912355] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.914526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.344 [2024-06-11 08:23:17.923676] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.924208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.924585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.924601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.924611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.924755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.924920] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.924929] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.924937] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.927177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.344 [2024-06-11 08:23:17.936097] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.936583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.937003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.937017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.937027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.937152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.937298] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.937307] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.937315] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.939634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.344 [2024-06-11 08:23:17.948839] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.949407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.949759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.949773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.344 [2024-06-11 08:23:17.949783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.344 [2024-06-11 08:23:17.949945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.344 [2024-06-11 08:23:17.950073] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.344 [2024-06-11 08:23:17.950082] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.344 [2024-06-11 08:23:17.950090] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.344 [2024-06-11 08:23:17.952236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.344 [2024-06-11 08:23:17.961346] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.344 [2024-06-11 08:23:17.961913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.344 [2024-06-11 08:23:17.962286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.345 [2024-06-11 08:23:17.962299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.345 [2024-06-11 08:23:17.962309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.345 [2024-06-11 08:23:17.962462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.345 [2024-06-11 08:23:17.962647] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.345 [2024-06-11 08:23:17.962656] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.345 [2024-06-11 08:23:17.962664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.345 [2024-06-11 08:23:17.964940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.345 [2024-06-11 08:23:17.974044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.345 [2024-06-11 08:23:17.974666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.345 [2024-06-11 08:23:17.974994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.345 [2024-06-11 08:23:17.975008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.345 [2024-06-11 08:23:17.975017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.345 [2024-06-11 08:23:17.975180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.345 [2024-06-11 08:23:17.975345] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.345 [2024-06-11 08:23:17.975354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.345 [2024-06-11 08:23:17.975362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.345 [2024-06-11 08:23:17.977462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.345 [2024-06-11 08:23:17.986566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.607 [2024-06-11 08:23:17.987055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-06-11 08:23:17.987434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-06-11 08:23:17.987457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.607 [2024-06-11 08:23:17.987472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.607 [2024-06-11 08:23:17.987653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.607 [2024-06-11 08:23:17.987781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.607 [2024-06-11 08:23:17.987790] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.607 [2024-06-11 08:23:17.987798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.607 [2024-06-11 08:23:17.990090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.607 [2024-06-11 08:23:17.998951] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.607 [2024-06-11 08:23:17.999506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-06-11 08:23:17.999713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-06-11 08:23:17.999727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.607 [2024-06-11 08:23:17.999737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.607 [2024-06-11 08:23:17.999881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.607 [2024-06-11 08:23:18.000102] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.607 [2024-06-11 08:23:18.000112] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.607 [2024-06-11 08:23:18.000120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.607 [2024-06-11 08:23:18.002310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.607 [2024-06-11 08:23:18.011266] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.607 [2024-06-11 08:23:18.011842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.607 [2024-06-11 08:23:18.012174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.012189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.012198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.012342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.012459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.012468] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.012476] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.014896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.608 [2024-06-11 08:23:18.023775] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.024365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.024676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.024691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.024700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.024886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.025089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.025098] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.025107] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.027362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.608 [2024-06-11 08:23:18.036172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.036745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.037090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.037104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.037113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.037276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.037469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.037479] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.037487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.039797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.608 [2024-06-11 08:23:18.048546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.049148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.049518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.049532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.049542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.049724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.049852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.049862] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.049870] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.052128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.608 [2024-06-11 08:23:18.061040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.061632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.061972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.061986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.061996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.062184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.062349] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.062358] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.062366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.064536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.608 [2024-06-11 08:23:18.073460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.074009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.074384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.074397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.074407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.074577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.074689] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.074698] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.074706] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.077077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.608 [2024-06-11 08:23:18.085837] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.086307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.086648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.086659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.086667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.086774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.086936] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.086944] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.086951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.089192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.608 [2024-06-11 08:23:18.098221] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.098699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.099016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.099027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.099034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.099197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.099345] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.099354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.099361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.101527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.608 [2024-06-11 08:23:18.110553] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.608 [2024-06-11 08:23:18.111162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.111541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.608 [2024-06-11 08:23:18.111556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.608 [2024-06-11 08:23:18.111566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.608 [2024-06-11 08:23:18.111730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.608 [2024-06-11 08:23:18.111895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.608 [2024-06-11 08:23:18.111904] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.608 [2024-06-11 08:23:18.111912] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.608 [2024-06-11 08:23:18.114099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.609 [2024-06-11 08:23:18.122965] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.123443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.123735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.123747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.123755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.123917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.124043] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.124051] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.124059] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.126256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.609 [2024-06-11 08:23:18.135417] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.136008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.136348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.136362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.136371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.136579] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.136746] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.136754] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.136768] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.138969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.609 [2024-06-11 08:23:18.147807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.148260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.148705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.148742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.148753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.148952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.149100] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.149109] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.149117] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.151435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.609 [2024-06-11 08:23:18.160200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.160744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.161118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.161132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.161141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.161285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.161413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.161422] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.161429] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.163620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.609 [2024-06-11 08:23:18.172769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.173376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.173755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.173771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.173780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.173887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.174034] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.174043] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.174054] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.176463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.609 [2024-06-11 08:23:18.185373] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.185990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.186367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.186381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.186390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.186598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.186728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.186738] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.186745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.189201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.609 [2024-06-11 08:23:18.197972] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.198554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.198891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.198904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.198914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.199095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.199224] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.199233] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.199240] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.201468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.609 [2024-06-11 08:23:18.210434] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.210882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.211213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.211225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.211233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.211358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.211492] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.211501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.211508] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.213852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.609 [2024-06-11 08:23:18.222986] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.609 [2024-06-11 08:23:18.223412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.223653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.609 [2024-06-11 08:23:18.223666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.609 [2024-06-11 08:23:18.223675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.609 [2024-06-11 08:23:18.223782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.609 [2024-06-11 08:23:18.223925] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.609 [2024-06-11 08:23:18.223934] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.609 [2024-06-11 08:23:18.223941] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.609 [2024-06-11 08:23:18.226245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.609 [2024-06-11 08:23:18.235387] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.610 [2024-06-11 08:23:18.235911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.610 [2024-06-11 08:23:18.236242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.610 [2024-06-11 08:23:18.236253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.610 [2024-06-11 08:23:18.236261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.610 [2024-06-11 08:23:18.236463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.610 [2024-06-11 08:23:18.236608] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.610 [2024-06-11 08:23:18.236616] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.610 [2024-06-11 08:23:18.236624] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.610 [2024-06-11 08:23:18.238853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.610 [2024-06-11 08:23:18.247883] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.610 [2024-06-11 08:23:18.248335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.610 [2024-06-11 08:23:18.248650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.610 [2024-06-11 08:23:18.248662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.610 [2024-06-11 08:23:18.248670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.610 [2024-06-11 08:23:18.248775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.610 [2024-06-11 08:23:18.248957] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.610 [2024-06-11 08:23:18.248966] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.610 [2024-06-11 08:23:18.248973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.610 [2024-06-11 08:23:18.251187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.872 [2024-06-11 08:23:18.260415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.872 [2024-06-11 08:23:18.260990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.872 [2024-06-11 08:23:18.261326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.872 [2024-06-11 08:23:18.261340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.872 [2024-06-11 08:23:18.261349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.872 [2024-06-11 08:23:18.261521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.872 [2024-06-11 08:23:18.261688] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.872 [2024-06-11 08:23:18.261697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.872 [2024-06-11 08:23:18.261704] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.872 [2024-06-11 08:23:18.263831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.872 [2024-06-11 08:23:18.272973] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.872 [2024-06-11 08:23:18.273484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.872 [2024-06-11 08:23:18.273812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.872 [2024-06-11 08:23:18.273826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.872 [2024-06-11 08:23:18.273835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.872 [2024-06-11 08:23:18.274016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.872 [2024-06-11 08:23:18.274162] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.872 [2024-06-11 08:23:18.274171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.872 [2024-06-11 08:23:18.274180] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.872 [2024-06-11 08:23:18.276517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.872 [2024-06-11 08:23:18.285354] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.285924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.286251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.286265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.286274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.286446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.286650] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.286659] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.286666] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.288796] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.873 [2024-06-11 08:23:18.297896] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.298529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.298907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.298921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.298931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.299057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.299185] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.299194] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.299202] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.301486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.873 [2024-06-11 08:23:18.310688] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.311170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.311485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.311497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.311505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.311704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.311883] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.311892] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.311898] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.314125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.873 [2024-06-11 08:23:18.323216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.323635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.323982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.323993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.324003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.324164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.324308] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.324316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.324323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.326702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.873 [2024-06-11 08:23:18.335772] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.336353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.336668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.336684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.336697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.336859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.337007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.337016] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.337023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.339281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.873 [2024-06-11 08:23:18.348431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.348913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.349246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.349256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.349264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.349407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.349522] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.349531] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.349538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.351784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.873 [2024-06-11 08:23:18.361051] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.361647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.362020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.362034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.362044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.362206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.362316] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.362325] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.362332] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.364630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.873 [2024-06-11 08:23:18.373848] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.374345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.374655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.374670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.374684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.374846] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.374994] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.375003] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.375010] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.377305] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.873 [2024-06-11 08:23:18.386358] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.386930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.387265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.387279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.387288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.873 [2024-06-11 08:23:18.387479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.873 [2024-06-11 08:23:18.387608] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.873 [2024-06-11 08:23:18.387617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.873 [2024-06-11 08:23:18.387625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.873 [2024-06-11 08:23:18.390225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.873 [2024-06-11 08:23:18.398744] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.873 [2024-06-11 08:23:18.399303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.399624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.873 [2024-06-11 08:23:18.399640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.873 [2024-06-11 08:23:18.399650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.399793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.399959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.399968] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.399976] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.402363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.874 [2024-06-11 08:23:18.411220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.411675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.411974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.411986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.411994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.412125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.412251] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.412261] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.412268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.414743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.874 [2024-06-11 08:23:18.423617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.424103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.424451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.424463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.424471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.424597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.424777] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.424787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.424795] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.426975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.874 [2024-06-11 08:23:18.436058] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.436435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.436780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.436791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.436799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.436924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.437012] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.437020] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.437027] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.439426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.874 [2024-06-11 08:23:18.448805] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.449422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.449769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.449783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.449792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.449955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.450087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.450097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.450105] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.452270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.874 [2024-06-11 08:23:18.461507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.462110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.462448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.462463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.462473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.462635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.462801] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.462809] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.462817] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.465111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.874 [2024-06-11 08:23:18.474007] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.474545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.474883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.474894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.474902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.475027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.475188] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.475196] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.475203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.477418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.874 [2024-06-11 08:23:18.486537] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.487018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.487361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.487372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.487379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.487513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.487658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.487670] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.487677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.489981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:47.874 [2024-06-11 08:23:18.499061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.499646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.499982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.499996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.500005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.500112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.500258] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.500267] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.874 [2024-06-11 08:23:18.500274] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.874 [2024-06-11 08:23:18.502541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:47.874 [2024-06-11 08:23:18.511614] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:47.874 [2024-06-11 08:23:18.512183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.512516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:47.874 [2024-06-11 08:23:18.512532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:47.874 [2024-06-11 08:23:18.512541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:47.874 [2024-06-11 08:23:18.512667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:47.874 [2024-06-11 08:23:18.512814] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:47.874 [2024-06-11 08:23:18.512824] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:47.875 [2024-06-11 08:23:18.512832] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:47.875 [2024-06-11 08:23:18.515145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.136 [2024-06-11 08:23:18.524037] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.136 [2024-06-11 08:23:18.524540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.136 [2024-06-11 08:23:18.524878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.136 [2024-06-11 08:23:18.524890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.136 [2024-06-11 08:23:18.524898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.136 [2024-06-11 08:23:18.525042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.136 [2024-06-11 08:23:18.525186] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.525194] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.525206] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.527400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.137 [2024-06-11 08:23:18.536490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.537066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.537452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.537467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.537477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.537657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.537786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.537795] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.537802] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.539783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.137 [2024-06-11 08:23:18.548969] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.549583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.549909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.549923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.549933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.550132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.550259] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.550268] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.550276] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.552564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.137 [2024-06-11 08:23:18.561492] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.562085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.562465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.562480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.562489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.562651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.562798] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.562807] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.562815] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.565112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.137 [2024-06-11 08:23:18.573974] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.574530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.574880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.574894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.574903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.575084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.575231] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.575240] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.575248] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.577346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.137 [2024-06-11 08:23:18.586085] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.586670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.587014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.587027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.587037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.587218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.587402] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.587411] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.587418] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.589517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.137 [2024-06-11 08:23:18.598251] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.598801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.599045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.599060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.599069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.599232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.599362] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.599371] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.599379] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.601602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.137 [2024-06-11 08:23:18.610754] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.611240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.611453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.611466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.611474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.611619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.611763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.611772] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.611779] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.614103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.137 [2024-06-11 08:23:18.623116] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.623899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.624259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.624274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.624283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.624408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.624583] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.624593] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.624602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.626898] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.137 [2024-06-11 08:23:18.635458] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.137 [2024-06-11 08:23:18.636068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.636451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.137 [2024-06-11 08:23:18.636466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.137 [2024-06-11 08:23:18.636476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.137 [2024-06-11 08:23:18.636674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.137 [2024-06-11 08:23:18.636840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.137 [2024-06-11 08:23:18.636850] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.137 [2024-06-11 08:23:18.636858] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.137 [2024-06-11 08:23:18.639135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.138 [2024-06-11 08:23:18.647896] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.648421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.648834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.648873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.648885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.649031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.649196] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.649206] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.649214] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.651515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.138 [2024-06-11 08:23:18.660435] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.660947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.661280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.661292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.661300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.661449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.661593] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.661602] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.661609] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.663820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.138 [2024-06-11 08:23:18.672717] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.673168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.673641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.673679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.673690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.673873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.674001] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.674011] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.674020] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.676152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.138 [2024-06-11 08:23:18.685080] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.685557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.685898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.685915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.685923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.686030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.686211] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.686220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.686227] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.688506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.138 [2024-06-11 08:23:18.697531] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.698060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.698397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.698408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.698416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.698564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.698672] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.698681] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.698687] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.700864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.138 [2024-06-11 08:23:18.710136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.710610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.710908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.710919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.710927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.711070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.711289] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.711297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.711304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.713641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.138 [2024-06-11 08:23:18.722500] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.722987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.723178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.723188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.723200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.723361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.723494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.723503] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.723510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.725522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.138 [2024-06-11 08:23:18.735004] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.735543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.735770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.735785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.735794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.735976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.736123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.736132] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.736140] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.738425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.138 [2024-06-11 08:23:18.747218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.747820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.748171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.748185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.748194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.138 [2024-06-11 08:23:18.748356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.138 [2024-06-11 08:23:18.748518] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.138 [2024-06-11 08:23:18.748528] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.138 [2024-06-11 08:23:18.748536] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.138 [2024-06-11 08:23:18.750813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.138 [2024-06-11 08:23:18.759534] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.138 [2024-06-11 08:23:18.760007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.760341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.138 [2024-06-11 08:23:18.760352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.138 [2024-06-11 08:23:18.760361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.139 [2024-06-11 08:23:18.760515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.139 [2024-06-11 08:23:18.760642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.139 [2024-06-11 08:23:18.760652] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.139 [2024-06-11 08:23:18.760659] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.139 [2024-06-11 08:23:18.762857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.139 [2024-06-11 08:23:18.772065] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.139 [2024-06-11 08:23:18.772577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.139 [2024-06-11 08:23:18.772904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.139 [2024-06-11 08:23:18.772915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.139 [2024-06-11 08:23:18.772923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.139 [2024-06-11 08:23:18.773066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.139 [2024-06-11 08:23:18.773228] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.139 [2024-06-11 08:23:18.773237] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.139 [2024-06-11 08:23:18.773244] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.139 [2024-06-11 08:23:18.775593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.402 [2024-06-11 08:23:18.784608] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.402 [2024-06-11 08:23:18.784972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.785258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.785269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.402 [2024-06-11 08:23:18.785276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.402 [2024-06-11 08:23:18.785444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.402 [2024-06-11 08:23:18.785551] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.402 [2024-06-11 08:23:18.785559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.402 [2024-06-11 08:23:18.785567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.402 [2024-06-11 08:23:18.787909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.402 [2024-06-11 08:23:18.797201] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.402 [2024-06-11 08:23:18.797704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.798006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.798017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.402 [2024-06-11 08:23:18.798024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.402 [2024-06-11 08:23:18.798187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.402 [2024-06-11 08:23:18.798353] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.402 [2024-06-11 08:23:18.798362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.402 [2024-06-11 08:23:18.798370] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.402 [2024-06-11 08:23:18.800759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.402 [2024-06-11 08:23:18.809608] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.402 [2024-06-11 08:23:18.810076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.810405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.810416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.402 [2024-06-11 08:23:18.810423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.402 [2024-06-11 08:23:18.810552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.402 [2024-06-11 08:23:18.810678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.402 [2024-06-11 08:23:18.810687] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.402 [2024-06-11 08:23:18.810693] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.402 [2024-06-11 08:23:18.812892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.402 [2024-06-11 08:23:18.821906] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.402 [2024-06-11 08:23:18.822410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.822632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.822644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.402 [2024-06-11 08:23:18.822652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.402 [2024-06-11 08:23:18.822795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.402 [2024-06-11 08:23:18.822957] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.402 [2024-06-11 08:23:18.822966] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.402 [2024-06-11 08:23:18.822973] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.402 [2024-06-11 08:23:18.825351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.402 [2024-06-11 08:23:18.834429] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.402 [2024-06-11 08:23:18.834960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.835249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.835260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.402 [2024-06-11 08:23:18.835267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.402 [2024-06-11 08:23:18.835428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.402 [2024-06-11 08:23:18.835614] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.402 [2024-06-11 08:23:18.835627] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.402 [2024-06-11 08:23:18.835635] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.402 [2024-06-11 08:23:18.837995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.402 [2024-06-11 08:23:18.846826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.402 [2024-06-11 08:23:18.847311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.847504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.402 [2024-06-11 08:23:18.847523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.402 [2024-06-11 08:23:18.847531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.402 [2024-06-11 08:23:18.847675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.402 [2024-06-11 08:23:18.847855] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.402 [2024-06-11 08:23:18.847865] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.402 [2024-06-11 08:23:18.847871] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.850082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.403 [2024-06-11 08:23:18.859408] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.859826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.860154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.860165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.860173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.860315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.860464] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.860473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.860480] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.862767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.403 [2024-06-11 08:23:18.871900] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.872383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.872578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.872592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.872599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.872779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.872943] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.872952] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.872963] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.874978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.403 [2024-06-11 08:23:18.884460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.884925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.885264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.885275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.885283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.885486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.885630] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.885639] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.885646] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.887857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.403 [2024-06-11 08:23:18.896858] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.897242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.897549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.897560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.897568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.897694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.897781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.897789] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.897796] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.900232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.403 [2024-06-11 08:23:18.909067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.909642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.910022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.910036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.910045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.910152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.910336] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.910345] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.910353] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.912624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.403 [2024-06-11 08:23:18.921482] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.921984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.922261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.922272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.922280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.922368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.922555] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.922564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.922571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.924824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.403 [2024-06-11 08:23:18.934031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.934450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.934756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.934766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.934774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.934954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.935135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.935143] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.935150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.937256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.403 [2024-06-11 08:23:18.946599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.947063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.947403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.947415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.947422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.947552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.947733] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.947742] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.947749] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.949864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.403 [2024-06-11 08:23:18.959094] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.959590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.959917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.959928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.403 [2024-06-11 08:23:18.959936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.403 [2024-06-11 08:23:18.960079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.403 [2024-06-11 08:23:18.960242] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.403 [2024-06-11 08:23:18.960250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.403 [2024-06-11 08:23:18.960258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.403 [2024-06-11 08:23:18.962679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.403 [2024-06-11 08:23:18.971489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.403 [2024-06-11 08:23:18.971936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.403 [2024-06-11 08:23:18.972229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:18.972240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.404 [2024-06-11 08:23:18.972248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.404 [2024-06-11 08:23:18.972390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.404 [2024-06-11 08:23:18.972522] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.404 [2024-06-11 08:23:18.972530] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.404 [2024-06-11 08:23:18.972537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.404 [2024-06-11 08:23:18.974642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.404 [2024-06-11 08:23:18.984015] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.404 [2024-06-11 08:23:18.984444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:18.984741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:18.984752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.404 [2024-06-11 08:23:18.984760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.404 [2024-06-11 08:23:18.984921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.404 [2024-06-11 08:23:18.985064] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.404 [2024-06-11 08:23:18.985073] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.404 [2024-06-11 08:23:18.985080] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.404 [2024-06-11 08:23:18.987351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.404 [2024-06-11 08:23:18.996459] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.404 [2024-06-11 08:23:18.996931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:18.997269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:18.997280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.404 [2024-06-11 08:23:18.997287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.404 [2024-06-11 08:23:18.997430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.404 [2024-06-11 08:23:18.997635] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.404 [2024-06-11 08:23:18.997644] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.404 [2024-06-11 08:23:18.997651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.404 [2024-06-11 08:23:18.999717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.404 [2024-06-11 08:23:19.009145] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.404 [2024-06-11 08:23:19.009612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:19.009920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:19.009931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.404 [2024-06-11 08:23:19.009938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.404 [2024-06-11 08:23:19.010100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.404 [2024-06-11 08:23:19.010244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.404 [2024-06-11 08:23:19.010252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.404 [2024-06-11 08:23:19.010259] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.404 [2024-06-11 08:23:19.012455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.404 [2024-06-11 08:23:19.021472] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.404 [2024-06-11 08:23:19.022042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:19.022420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:19.022434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.404 [2024-06-11 08:23:19.022452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.404 [2024-06-11 08:23:19.022615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.404 [2024-06-11 08:23:19.022763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.404 [2024-06-11 08:23:19.022772] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.404 [2024-06-11 08:23:19.022780] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.404 [2024-06-11 08:23:19.025113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.404 [2024-06-11 08:23:19.033813] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.404 [2024-06-11 08:23:19.034284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:19.034711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.404 [2024-06-11 08:23:19.034754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.404 [2024-06-11 08:23:19.034765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.404 [2024-06-11 08:23:19.034946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.404 [2024-06-11 08:23:19.035113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.404 [2024-06-11 08:23:19.035122] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.404 [2024-06-11 08:23:19.035129] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.404 [2024-06-11 08:23:19.037581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.404 [2024-06-11 08:23:19.046264] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.046886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.047221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.047235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.047245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.047408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.047578] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.047588] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.047596] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.049973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.671 [2024-06-11 08:23:19.058805] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.059267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.059690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.059729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.059741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.059888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.060017] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.060026] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.060034] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.062449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.671 [2024-06-11 08:23:19.071169] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.071759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.072009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.072023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.072037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.072163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.072291] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.072301] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.072308] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.074557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.671 [2024-06-11 08:23:19.083599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.084129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.084465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.084478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.084486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.084648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.084755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.084764] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.084771] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.087116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.671 [2024-06-11 08:23:19.096184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.096659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.096984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.096995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.097003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.097128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.097254] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.097262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.097269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.099585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.671 [2024-06-11 08:23:19.108737] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.109337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.109682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.109698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.109707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.109893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.110077] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.110087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.110095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.112521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.671 [2024-06-11 08:23:19.121202] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.121633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.121954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.121967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.121977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.122121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.122268] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.122277] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.122285] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.124549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.671 [2024-06-11 08:23:19.133718] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.671 [2024-06-11 08:23:19.134251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.134488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.671 [2024-06-11 08:23:19.134500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.671 [2024-06-11 08:23:19.134508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.671 [2024-06-11 08:23:19.134652] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.671 [2024-06-11 08:23:19.134742] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.671 [2024-06-11 08:23:19.134751] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.671 [2024-06-11 08:23:19.134758] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.671 [2024-06-11 08:23:19.136954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.671 [2024-06-11 08:23:19.146067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.146634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.146886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.146900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.146909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.147071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.147168] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.147178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.147186] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.149533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.672 [2024-06-11 08:23:19.158548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.159133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.159466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.159489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.159499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.159699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.159845] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.159855] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.159862] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.162108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.672 [2024-06-11 08:23:19.171175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.171659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.171994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.172005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.172013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.172192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.172335] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.172343] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.172351] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.174518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.672 [2024-06-11 08:23:19.183755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.184202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.184506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.184517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.184525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.184668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.184848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.184860] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.184868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.187121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.672 [2024-06-11 08:23:19.196301] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.196816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.197155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.197166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.197173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.197354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.197522] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.197531] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.197538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.199730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.672 [2024-06-11 08:23:19.208837] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.209275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.209622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.209634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.209642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.209767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.209893] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.209901] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.209907] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.212289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.672 [2024-06-11 08:23:19.221369] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.221841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.222156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.222166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.222174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.222317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.222466] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.222475] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.222486] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.224899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.672 [2024-06-11 08:23:19.233755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.234081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.234432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.234452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.234460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.234568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.234733] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.234742] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.234750] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.237148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.672 [2024-06-11 08:23:19.246378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.246953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.247301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.247315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.247325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.247478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.247589] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.247598] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.247605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.249857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.672 [2024-06-11 08:23:19.259000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.259475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.259780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.259793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.259801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.259963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.260089] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.672 [2024-06-11 08:23:19.260097] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.672 [2024-06-11 08:23:19.260105] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.672 [2024-06-11 08:23:19.262380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.672 [2024-06-11 08:23:19.271583] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.672 [2024-06-11 08:23:19.272033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.272332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.672 [2024-06-11 08:23:19.272343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.672 [2024-06-11 08:23:19.272351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.672 [2024-06-11 08:23:19.272537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.672 [2024-06-11 08:23:19.272681] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.673 [2024-06-11 08:23:19.272690] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.673 [2024-06-11 08:23:19.272697] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.673 [2024-06-11 08:23:19.275057] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.673 [2024-06-11 08:23:19.284247] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.673 [2024-06-11 08:23:19.284754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.673 [2024-06-11 08:23:19.285091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.673 [2024-06-11 08:23:19.285106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.673 [2024-06-11 08:23:19.285115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.673 [2024-06-11 08:23:19.285279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.673 [2024-06-11 08:23:19.285408] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.673 [2024-06-11 08:23:19.285418] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.673 [2024-06-11 08:23:19.285426] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.673 [2024-06-11 08:23:19.287706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.673 [2024-06-11 08:23:19.296596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.673 [2024-06-11 08:23:19.297188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.673 [2024-06-11 08:23:19.297550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.673 [2024-06-11 08:23:19.297566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.673 [2024-06-11 08:23:19.297575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.673 [2024-06-11 08:23:19.297774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.673 [2024-06-11 08:23:19.297940] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.673 [2024-06-11 08:23:19.297949] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.673 [2024-06-11 08:23:19.297957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.673 [2024-06-11 08:23:19.300141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1251814 Killed "${NVMF_APP[@]}" "$@" 00:30:48.673 08:23:19 -- host/bdevperf.sh@36 -- # tgt_init 00:30:48.673 08:23:19 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:48.673 08:23:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:48.673 08:23:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:48.673 08:23:19 -- common/autotest_common.sh@10 -- # set +x 00:30:48.673 [2024-06-11 08:23:19.309144] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.673 [2024-06-11 08:23:19.309633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.673 [2024-06-11 08:23:19.310017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.673 [2024-06-11 08:23:19.310029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.673 [2024-06-11 08:23:19.310037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.673 [2024-06-11 08:23:19.310146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.673 [2024-06-11 08:23:19.310253] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.673 [2024-06-11 08:23:19.310262] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.673 [2024-06-11 08:23:19.310269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.995 [2024-06-11 08:23:19.312317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.995 08:23:19 -- nvmf/common.sh@469 -- # nvmfpid=1253511 00:30:48.995 08:23:19 -- nvmf/common.sh@470 -- # waitforlisten 1253511 00:30:48.995 08:23:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:48.995 08:23:19 -- common/autotest_common.sh@819 -- # '[' -z 1253511 ']' 00:30:48.995 08:23:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.995 08:23:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:48.995 08:23:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.995 08:23:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:48.995 08:23:19 -- common/autotest_common.sh@10 -- # set +x 00:30:48.995 [2024-06-11 08:23:19.321744] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.995 [2024-06-11 08:23:19.322236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.995 [2024-06-11 08:23:19.322500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.995 [2024-06-11 08:23:19.322512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.995 [2024-06-11 08:23:19.322520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.995 [2024-06-11 08:23:19.322719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.995 [2024-06-11 08:23:19.322845] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.995 [2024-06-11 08:23:19.322853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.995 [2024-06-11 08:23:19.322860] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.995 [2024-06-11 08:23:19.324980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.995 [2024-06-11 08:23:19.334127] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.995 [2024-06-11 08:23:19.334472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.995 [2024-06-11 08:23:19.334789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.995 [2024-06-11 08:23:19.334800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.995 [2024-06-11 08:23:19.334808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.995 [2024-06-11 08:23:19.334953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.995 [2024-06-11 08:23:19.335097] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.995 [2024-06-11 08:23:19.335106] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.995 [2024-06-11 08:23:19.335113] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.995 [2024-06-11 08:23:19.337279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.995 [2024-06-11 08:23:19.346685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.995 [2024-06-11 08:23:19.347260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.995 [2024-06-11 08:23:19.347608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.995 [2024-06-11 08:23:19.347624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.995 [2024-06-11 08:23:19.347634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.995 [2024-06-11 08:23:19.347797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.347946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.347955] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.347962] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.350100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.996 [2024-06-11 08:23:19.359413] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.359895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.360246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.360260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.360270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.360477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.360643] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.360653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.360661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.362918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.996 [2024-06-11 08:23:19.364618] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:48.996 [2024-06-11 08:23:19.364665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.996 [2024-06-11 08:23:19.371778] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.372226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.372552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.372564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.372572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.372697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.372823] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.372831] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.372838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.375255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.996 [2024-06-11 08:23:19.384363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.384682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.384976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.384987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.384995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.385101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.385225] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.385233] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.385241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.387640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.996 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.996 [2024-06-11 08:23:19.396969] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.397403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.397763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.397778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.397787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.397988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.398135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.398144] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.398151] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.400307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.996 [2024-06-11 08:23:19.409650] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.410103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.410450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.410463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.410472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.410598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.410723] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.410731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.410739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.412969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.996 [2024-06-11 08:23:19.422031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.422525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.422730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.422741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.422749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.422874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.423000] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.423009] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.423018] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.425218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.996 [2024-06-11 08:23:19.434335] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.434759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.434952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.434962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.434970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.435133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.435258] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.435267] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.435274] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.437689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.996 [2024-06-11 08:23:19.445626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:48.996 [2024-06-11 08:23:19.446755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.447390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.447717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.447733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.447742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.447888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.448054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.448063] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.448070] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.450435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.996 [2024-06-11 08:23:19.459160] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.459614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.459966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.459977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.459985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.460129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.460274] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.460283] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.460291] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.462472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.996 [2024-06-11 08:23:19.471813] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.472314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.472669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.472680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.472688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.472833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.472996] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.473004] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.473013] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.475296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.996 [2024-06-11 08:23:19.484068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.484550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.484888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.484903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.484911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.485073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.485199] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.485208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.485215] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.996 [2024-06-11 08:23:19.487678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.996 [2024-06-11 08:23:19.496590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.996 [2024-06-11 08:23:19.497201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.497568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.996 [2024-06-11 08:23:19.497583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.996 [2024-06-11 08:23:19.497593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.996 [2024-06-11 08:23:19.497637] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:48.996 [2024-06-11 08:23:19.497722] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.996 [2024-06-11 08:23:19.497728] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.996 [2024-06-11 08:23:19.497733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:48.996 [2024-06-11 08:23:19.497797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.996 [2024-06-11 08:23:19.497837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:48.996 [2024-06-11 08:23:19.497963] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.996 [2024-06-11 08:23:19.497972] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.996 [2024-06-11 08:23:19.497980] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:48.996 [2024-06-11 08:23:19.498000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.997 [2024-06-11 08:23:19.498002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:48.997 [2024-06-11 08:23:19.500289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.997 [2024-06-11 08:23:19.508922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.509548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.509900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.509914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.509924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.510126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.510292] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.510302] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.510315] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.512669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.997 [2024-06-11 08:23:19.521243] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.521807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.522148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.522162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.522171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.522298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.522490] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.522501] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.522509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.524766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.997 [2024-06-11 08:23:19.533697] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.534219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.534496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.534512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.534522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.534648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.534851] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.534861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.534868] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.537056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.997 [2024-06-11 08:23:19.546248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.546941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.547337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.547351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.547360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.547511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.547642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.547651] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.547659] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.549846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.997 [2024-06-11 08:23:19.558843] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.559476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.559701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.559714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.559724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.559886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.560071] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.560080] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.560087] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.562219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.997 [2024-06-11 08:23:19.571296] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.571854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.572214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.572228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.572238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.572400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.572553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.572563] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.572571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.574758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.997 [2024-06-11 08:23:19.583689] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.584187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.584519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.584535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.584544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.584742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.584889] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.584898] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.584906] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.587019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.997 [2024-06-11 08:23:19.596155] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.596737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.597076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.597091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.597101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.597283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.597473] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.597483] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.597491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.599930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.997 [2024-06-11 08:23:19.608551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.609010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.609400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.609410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.609418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.609566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.609728] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.609737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.609743] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.611826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:48.997 [2024-06-11 08:23:19.621115] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.621726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.622079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.622093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.622102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.622227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.622392] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.622401] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.622409] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.624655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:48.997 [2024-06-11 08:23:19.633682] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.997 [2024-06-11 08:23:19.634096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.634320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:48.997 [2024-06-11 08:23:19.634333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:48.997 [2024-06-11 08:23:19.634343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:48.997 [2024-06-11 08:23:19.634475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:48.997 [2024-06-11 08:23:19.634624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:48.997 [2024-06-11 08:23:19.634633] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:48.997 [2024-06-11 08:23:19.634641] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.997 [2024-06-11 08:23:19.636840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.317 [2024-06-11 08:23:19.646017] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.317 [2024-06-11 08:23:19.646671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.646894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.646910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.317 [2024-06-11 08:23:19.646920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.317 [2024-06-11 08:23:19.647119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.317 [2024-06-11 08:23:19.647229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.317 [2024-06-11 08:23:19.647239] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.317 [2024-06-11 08:23:19.647247] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.317 [2024-06-11 08:23:19.649463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.317 [2024-06-11 08:23:19.658424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.317 [2024-06-11 08:23:19.658893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.659231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.659242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.317 [2024-06-11 08:23:19.659250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.317 [2024-06-11 08:23:19.659394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.317 [2024-06-11 08:23:19.659580] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.317 [2024-06-11 08:23:19.659590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.317 [2024-06-11 08:23:19.659598] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.317 [2024-06-11 08:23:19.661974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.317 [2024-06-11 08:23:19.670850] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.317 [2024-06-11 08:23:19.671422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.671777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.671795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.317 [2024-06-11 08:23:19.671805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.317 [2024-06-11 08:23:19.671967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.317 [2024-06-11 08:23:19.672095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.317 [2024-06-11 08:23:19.672104] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.317 [2024-06-11 08:23:19.672112] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.317 [2024-06-11 08:23:19.674522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.317 [2024-06-11 08:23:19.683194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.317 [2024-06-11 08:23:19.683619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.683853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.683866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.317 [2024-06-11 08:23:19.683876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.317 [2024-06-11 08:23:19.684039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.317 [2024-06-11 08:23:19.684225] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.317 [2024-06-11 08:23:19.684234] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.317 [2024-06-11 08:23:19.684242] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.317 [2024-06-11 08:23:19.686448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.317 [2024-06-11 08:23:19.695755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.317 [2024-06-11 08:23:19.696342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.696692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.696707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.317 [2024-06-11 08:23:19.696717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.317 [2024-06-11 08:23:19.696860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.317 [2024-06-11 08:23:19.697043] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.317 [2024-06-11 08:23:19.697055] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.317 [2024-06-11 08:23:19.697063] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.317 [2024-06-11 08:23:19.699140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.317 [2024-06-11 08:23:19.708198] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.317 [2024-06-11 08:23:19.708864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.709210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-06-11 08:23:19.709223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.709237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.709400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.709607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.709618] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.709626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.711918] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.318 [2024-06-11 08:23:19.720599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.721119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.721462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.721473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.721481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.721626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.721751] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.721760] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.721767] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.724017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.318 [2024-06-11 08:23:19.733177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.733750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.734145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.734158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.734168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.734386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.734575] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.734585] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.734593] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.736999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.318 [2024-06-11 08:23:19.745685] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.746270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.746685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.746700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.746710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.746821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.747029] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.747042] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.747054] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.749324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.318 [2024-06-11 08:23:19.758303] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.758925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.759158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.759171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.759181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.759325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.759498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.759509] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.759516] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.761845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.318 [2024-06-11 08:23:19.770714] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.771222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.771603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.771619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.771628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.771772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.771901] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.771911] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.771918] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.774175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.318 [2024-06-11 08:23:19.783170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.783501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.783871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.783882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.783890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.783980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.784147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.784156] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.784164] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.786360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.318 [2024-06-11 08:23:19.795750] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.796199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.796471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.796483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.796492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.796653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.796816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.796825] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.796832] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.799160] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.318 [2024-06-11 08:23:19.808120] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.808724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.809074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.809088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.809097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.809297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.809426] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.318 [2024-06-11 08:23:19.809435] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.318 [2024-06-11 08:23:19.809451] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.318 [2024-06-11 08:23:19.811741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.318 [2024-06-11 08:23:19.820706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.318 [2024-06-11 08:23:19.821283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.821654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.318 [2024-06-11 08:23:19.821670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.318 [2024-06-11 08:23:19.821679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.318 [2024-06-11 08:23:19.821879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.318 [2024-06-11 08:23:19.822008] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.822021] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.822029] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.824121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.319 [2024-06-11 08:23:19.832982] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.833542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.833895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.833909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.833919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.834101] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.834286] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.834296] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.834304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.836606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.319 [2024-06-11 08:23:19.845584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.846105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.846287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.846299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.846309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.846497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.846644] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.846653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.846660] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.848823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.319 [2024-06-11 08:23:19.858291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.858794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.859121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.859131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.859139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.859264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.859371] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.859378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.859390] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.861608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.319 [2024-06-11 08:23:19.870787] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.871333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.871680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.871695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.871705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.871849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.872033] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.872042] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.872050] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.874272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.319 [2024-06-11 08:23:19.883473] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.883915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.884105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.884115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.884123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.884267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.884454] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.884463] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.884470] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.886680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.319 [2024-06-11 08:23:19.895764] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.896233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.896539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.896550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.896557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.896665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.896770] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.896778] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.896785] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.898938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.319 [2024-06-11 08:23:19.908252] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.908867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.909214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.909227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.909237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.909363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.909517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.909526] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.909534] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.911733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.319 [2024-06-11 08:23:19.920628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.921195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.921538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.921552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.921562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.921706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.921852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.921861] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.921869] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.319 [2024-06-11 08:23:19.924038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.319 [2024-06-11 08:23:19.933180] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.319 [2024-06-11 08:23:19.933616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.933799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.319 [2024-06-11 08:23:19.933809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.319 [2024-06-11 08:23:19.933817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.319 [2024-06-11 08:23:19.933962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.319 [2024-06-11 08:23:19.934123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.319 [2024-06-11 08:23:19.934132] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.319 [2024-06-11 08:23:19.934139] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:19.936293] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.582 [2024-06-11 08:23:19.945609] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:19.946190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.946527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.946540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:19.946550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:19.946675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:19.946819] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:19.946828] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:19.946835] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:19.948960] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.582 [2024-06-11 08:23:19.958268] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:19.958732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.959050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.959059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:19.959067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:19.959212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:19.959354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:19.959361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:19.959368] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:19.961797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.582 [2024-06-11 08:23:19.970867] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:19.971321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.971370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.971379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:19.971387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:19.971535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:19.971623] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:19.971630] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:19.971637] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:19.973920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.582 [2024-06-11 08:23:19.983304] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:19.983945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.984282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.984296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:19.984305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:19.984476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:19.984623] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:19.984631] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:19.984639] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:19.986785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.582 [2024-06-11 08:23:19.995797] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:19.996252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.996445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:19.996455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:19.996463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:19.996607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:19.996732] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:19.996741] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:19.996748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:19.998990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.582 [2024-06-11 08:23:20.008249] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:20.008620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.008965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.008974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:20.008982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:20.009145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:20.009306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:20.009314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:20.009321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:20.011814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.582 [2024-06-11 08:23:20.020584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:20.021014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.021212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.021227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:20.021234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:20.021395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:20.021584] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:20.021593] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:20.021600] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:20.023922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.582 [2024-06-11 08:23:20.033059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:20.033524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.033851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.033861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:20.033869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:20.034030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:20.034155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:20.034163] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:20.034170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:20.036345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.582 [2024-06-11 08:23:20.045604] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.582 [2024-06-11 08:23:20.046135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.046447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.582 [2024-06-11 08:23:20.046457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.582 [2024-06-11 08:23:20.046465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.582 [2024-06-11 08:23:20.046626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.582 [2024-06-11 08:23:20.046789] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.582 [2024-06-11 08:23:20.046797] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.582 [2024-06-11 08:23:20.046803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.582 [2024-06-11 08:23:20.049166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.583 [2024-06-11 08:23:20.058010] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.058539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.058778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.058790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.058804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.059022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.059206] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.059214] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.059222] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.061490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.583 [2024-06-11 08:23:20.070391] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.070953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.071306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.071318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.071328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.071517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.071664] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.071672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.071679] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.073937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.583 [2024-06-11 08:23:20.082893] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.083490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.083854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.083867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.083878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.084059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.084205] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.084213] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.084221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.086354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.583 [2024-06-11 08:23:20.095343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.095899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.096114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.096127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.096136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.096357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.096513] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.096522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.096531] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.098894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.583 [2024-06-11 08:23:20.107792] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.108269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.108651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.108663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.108671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.108778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.108903] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.108910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.108918] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.111151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.583 [2024-06-11 08:23:20.120292] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.120879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.121124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.121137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.121147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.121290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.121436] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.121453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.121461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.123751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.583 [2024-06-11 08:23:20.132645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 08:23:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:49.583 [2024-06-11 08:23:20.133060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 08:23:20 -- common/autotest_common.sh@852 -- # return 0 00:30:49.583 [2024-06-11 08:23:20.133304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.133317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.133326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.133484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 08:23:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:49.583 [2024-06-11 08:23:20.133687] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.133696] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.133703] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 08:23:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:49.583 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:49.583 [2024-06-11 08:23:20.136085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.583 [2024-06-11 08:23:20.145322] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.145806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.146131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.146141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.146149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.146310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.146477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.146485] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.146492] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.148593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.583 [2024-06-11 08:23:20.158061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.158586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.159007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.583 [2024-06-11 08:23:20.159020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.583 [2024-06-11 08:23:20.159030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.583 [2024-06-11 08:23:20.159173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.583 [2024-06-11 08:23:20.159337] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.583 [2024-06-11 08:23:20.159346] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.583 [2024-06-11 08:23:20.159355] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.583 [2024-06-11 08:23:20.161562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.583 [2024-06-11 08:23:20.170518] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.583 [2024-06-11 08:23:20.170971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.171278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.171288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.584 [2024-06-11 08:23:20.171296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.584 [2024-06-11 08:23:20.171426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.584 [2024-06-11 08:23:20.171594] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.584 [2024-06-11 08:23:20.171603] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.584 [2024-06-11 08:23:20.171610] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.584 08:23:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.584 08:23:20 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.584 [2024-06-11 08:23:20.173862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.584 08:23:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.584 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:49.584 [2024-06-11 08:23:20.180917] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.584 [2024-06-11 08:23:20.183116] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.584 [2024-06-11 08:23:20.183567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.183810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.183820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.584 [2024-06-11 08:23:20.183827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.584 [2024-06-11 08:23:20.183952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.584 [2024-06-11 08:23:20.184130] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.584 [2024-06-11 08:23:20.184138] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.584 [2024-06-11 08:23:20.184145] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.584 08:23:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.584 08:23:20 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:49.584 [2024-06-11 08:23:20.186471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.584 08:23:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.584 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:49.584 [2024-06-11 08:23:20.195430] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.584 [2024-06-11 08:23:20.195907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.196238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.196250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.584 [2024-06-11 08:23:20.196260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.584 [2024-06-11 08:23:20.196447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.584 [2024-06-11 08:23:20.196594] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.584 [2024-06-11 08:23:20.196602] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.584 [2024-06-11 08:23:20.196610] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.584 [2024-06-11 08:23:20.198834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.584 [2024-06-11 08:23:20.207855] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.584 [2024-06-11 08:23:20.208350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.208536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.208546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.584 [2024-06-11 08:23:20.208554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.584 [2024-06-11 08:23:20.208734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.584 [2024-06-11 08:23:20.208896] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.584 [2024-06-11 08:23:20.208904] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.584 [2024-06-11 08:23:20.208910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.584 [2024-06-11 08:23:20.211063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.584 Malloc0 00:30:49.584 08:23:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.584 08:23:20 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:49.584 08:23:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.584 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:49.584 [2024-06-11 08:23:20.220188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.584 [2024-06-11 08:23:20.220696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.221028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.584 [2024-06-11 08:23:20.221038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.584 [2024-06-11 08:23:20.221046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.584 [2024-06-11 08:23:20.221190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.584 [2024-06-11 08:23:20.221351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.584 [2024-06-11 08:23:20.221358] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.584 [2024-06-11 08:23:20.221365] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.584 [2024-06-11 08:23:20.223763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:49.845 08:23:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.845 08:23:20 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:49.845 08:23:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.845 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:49.845 [2024-06-11 08:23:20.232696] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.845 [2024-06-11 08:23:20.233146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.845 [2024-06-11 08:23:20.233450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.845 [2024-06-11 08:23:20.233461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ba450 with addr=10.0.0.2, port=4420 00:30:49.845 [2024-06-11 08:23:20.233469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba450 is same with the state(5) to be set 00:30:49.845 [2024-06-11 08:23:20.233610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ba450 (9): Bad file descriptor 00:30:49.845 [2024-06-11 08:23:20.233753] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:49.845 [2024-06-11 08:23:20.233765] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:49.845 [2024-06-11 08:23:20.233772] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:49.845 [2024-06-11 08:23:20.236129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:49.845 08:23:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.845 08:23:20 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.845 08:23:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.845 08:23:20 -- common/autotest_common.sh@10 -- # set +x 00:30:49.845 [2024-06-11 08:23:20.244025] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.845 [2024-06-11 08:23:20.245168] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:49.845 08:23:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.845 08:23:20 -- host/bdevperf.sh@38 -- # wait 1252186 00:30:49.845 [2024-06-11 08:23:20.284021] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
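[editor's note] The trace above provisions the bdevperf target over JSON-RPC (bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). A minimal bash sketch of the same calls issued through scripts/rpc.py is shown below; the rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions for illustration, not taken from this log, and the test itself drives these calls through its own rpc_cmd/rpc.py wrappers rather than this exact script.

  # Hedged sketch: replay the provisioning steps visible in the trace above
  # against an already-running nvmf_tgt. Path and socket are assumed.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed invocation path

  $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420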
00:30:58.005 00:30:58.005 Latency(us) 00:30:58.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:58.005 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:58.005 Verification LBA range: start 0x0 length 0x4000 00:30:58.005 Nvme1n1 : 15.00 14296.48 55.85 14801.81 0.00 4383.96 512.00 21299.20 00:30:58.006 =================================================================================================================== 00:30:58.006 Total : 14296.48 55.85 14801.81 0.00 4383.96 512.00 21299.20 00:30:58.267 08:23:28 -- host/bdevperf.sh@39 -- # sync 00:30:58.267 08:23:28 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:58.267 08:23:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.267 08:23:28 -- common/autotest_common.sh@10 -- # set +x 00:30:58.267 08:23:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.267 08:23:28 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:58.267 08:23:28 -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:58.267 08:23:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:58.267 08:23:28 -- nvmf/common.sh@116 -- # sync 00:30:58.267 08:23:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:58.267 08:23:28 -- nvmf/common.sh@119 -- # set +e 00:30:58.267 08:23:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:58.267 08:23:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:58.267 rmmod nvme_tcp 00:30:58.267 rmmod nvme_fabrics 00:30:58.267 rmmod nvme_keyring 00:30:58.267 08:23:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:58.267 08:23:28 -- nvmf/common.sh@123 -- # set -e 00:30:58.267 08:23:28 -- nvmf/common.sh@124 -- # return 0 00:30:58.267 08:23:28 -- nvmf/common.sh@477 -- # '[' -n 1253511 ']' 00:30:58.267 08:23:28 -- nvmf/common.sh@478 -- # killprocess 1253511 00:30:58.267 08:23:28 -- common/autotest_common.sh@926 -- # '[' -z 1253511 ']' 00:30:58.267 08:23:28 -- common/autotest_common.sh@930 -- # kill -0 1253511 00:30:58.267 08:23:28 -- common/autotest_common.sh@931 -- # uname 00:30:58.267 08:23:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:58.267 08:23:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1253511 00:30:58.267 08:23:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:58.267 08:23:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:58.267 08:23:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1253511' 00:30:58.267 killing process with pid 1253511 00:30:58.527 08:23:28 -- common/autotest_common.sh@945 -- # kill 1253511 00:30:58.527 08:23:28 -- common/autotest_common.sh@950 -- # wait 1253511 00:30:58.527 08:23:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:58.527 08:23:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:58.527 08:23:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:58.527 08:23:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:58.527 08:23:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:58.527 08:23:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.527 08:23:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:58.527 08:23:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.074 08:23:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:01.074 00:31:01.074 real 0m27.647s 00:31:01.074 user 1m2.894s 00:31:01.074 sys 0m6.883s 00:31:01.074 08:23:31 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:01.074 08:23:31 -- common/autotest_common.sh@10 -- # set +x 00:31:01.074 ************************************ 00:31:01.074 END TEST nvmf_bdevperf 00:31:01.074 ************************************ 00:31:01.074 08:23:31 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:01.074 08:23:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:01.074 08:23:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:01.074 08:23:31 -- common/autotest_common.sh@10 -- # set +x 00:31:01.074 ************************************ 00:31:01.074 START TEST nvmf_target_disconnect 00:31:01.074 ************************************ 00:31:01.074 08:23:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:01.074 * Looking for test storage... 00:31:01.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:01.074 08:23:31 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:01.074 08:23:31 -- nvmf/common.sh@7 -- # uname -s 00:31:01.074 08:23:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:01.074 08:23:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:01.074 08:23:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:01.074 08:23:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:01.074 08:23:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:01.074 08:23:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:01.074 08:23:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:01.074 08:23:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:01.074 08:23:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:01.074 08:23:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:01.074 08:23:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:01.074 08:23:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:01.074 08:23:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:01.074 08:23:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:01.074 08:23:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:01.074 08:23:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:01.074 08:23:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:01.074 08:23:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.074 08:23:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.074 08:23:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.075 08:23:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.075 08:23:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.075 08:23:31 -- paths/export.sh@5 -- # export PATH 00:31:01.075 08:23:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:01.075 08:23:31 -- nvmf/common.sh@46 -- # : 0 00:31:01.075 08:23:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:01.075 08:23:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:01.075 08:23:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:01.075 08:23:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:01.075 08:23:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:01.075 08:23:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:01.075 08:23:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:01.075 08:23:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:01.075 08:23:31 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:01.075 08:23:31 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:01.075 08:23:31 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:01.075 08:23:31 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:01.075 08:23:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:01.075 08:23:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:01.075 08:23:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:01.075 08:23:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:01.075 08:23:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:01.075 08:23:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.075 08:23:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.075 08:23:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:01.075 08:23:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:01.075 08:23:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:01.075 08:23:31 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:31:01.075 08:23:31 -- common/autotest_common.sh@10 -- # set +x 00:31:07.659 08:23:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:07.659 08:23:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:07.659 08:23:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:07.659 08:23:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:07.659 08:23:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:07.659 08:23:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:07.659 08:23:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:07.659 08:23:38 -- nvmf/common.sh@294 -- # net_devs=() 00:31:07.659 08:23:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:07.659 08:23:38 -- nvmf/common.sh@295 -- # e810=() 00:31:07.659 08:23:38 -- nvmf/common.sh@295 -- # local -ga e810 00:31:07.659 08:23:38 -- nvmf/common.sh@296 -- # x722=() 00:31:07.659 08:23:38 -- nvmf/common.sh@296 -- # local -ga x722 00:31:07.659 08:23:38 -- nvmf/common.sh@297 -- # mlx=() 00:31:07.659 08:23:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:07.659 08:23:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.659 08:23:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:07.659 08:23:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:07.659 08:23:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:07.659 08:23:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:07.659 08:23:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:07.659 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:07.659 08:23:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:07.659 08:23:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:07.659 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:07.659 08:23:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:07.659 08:23:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:07.659 08:23:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.659 08:23:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:07.659 08:23:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.659 08:23:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:07.659 Found net devices under 0000:31:00.0: cvl_0_0 00:31:07.659 08:23:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.659 08:23:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:07.659 08:23:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.659 08:23:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:07.659 08:23:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.659 08:23:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:07.659 Found net devices under 0000:31:00.1: cvl_0_1 00:31:07.659 08:23:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.659 08:23:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:07.659 08:23:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:07.659 08:23:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:07.659 08:23:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:07.659 08:23:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:07.659 08:23:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.659 08:23:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:07.659 08:23:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:07.659 08:23:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:07.659 08:23:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:07.659 08:23:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:07.659 08:23:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:07.659 08:23:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.659 08:23:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:07.659 08:23:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:07.659 08:23:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:07.659 08:23:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:07.920 08:23:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:07.920 08:23:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:07.920 08:23:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:07.920 08:23:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:07.920 08:23:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:07.920 08:23:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:07.920 08:23:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:07.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:07.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:31:07.920 00:31:07.920 --- 10.0.0.2 ping statistics --- 00:31:07.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.920 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:31:08.182 08:23:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:31:08.182 00:31:08.182 --- 10.0.0.1 ping statistics --- 00:31:08.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.182 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:31:08.182 08:23:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.182 08:23:38 -- nvmf/common.sh@410 -- # return 0 00:31:08.182 08:23:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:08.182 08:23:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.182 08:23:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:08.182 08:23:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:08.182 08:23:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.182 08:23:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:08.182 08:23:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:08.182 08:23:38 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:08.182 08:23:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:08.182 08:23:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:08.182 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:31:08.182 ************************************ 00:31:08.182 START TEST nvmf_target_disconnect_tc1 00:31:08.182 ************************************ 00:31:08.182 08:23:38 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:31:08.182 08:23:38 -- host/target_disconnect.sh@32 -- # set +e 00:31:08.182 08:23:38 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.182 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.182 [2024-06-11 08:23:38.708108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.182 [2024-06-11 08:23:38.708356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.182 [2024-06-11 08:23:38.708371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe47310 with addr=10.0.0.2, port=4420 00:31:08.182 [2024-06-11 08:23:38.708395] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:08.182 [2024-06-11 08:23:38.708405] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:08.182 [2024-06-11 08:23:38.708412] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:08.182 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:08.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:08.182 Initializing NVMe Controllers 00:31:08.182 08:23:38 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:08.182 08:23:38 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:08.182 08:23:38 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:31:08.182 08:23:38 -- common/autotest_common.sh@1132 -- # return 0 00:31:08.182 
08:23:38 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:08.182 08:23:38 -- host/target_disconnect.sh@41 -- # set -e 00:31:08.182 00:31:08.182 real 0m0.105s 00:31:08.182 user 0m0.045s 00:31:08.182 sys 0m0.058s 00:31:08.182 08:23:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.182 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:31:08.182 ************************************ 00:31:08.182 END TEST nvmf_target_disconnect_tc1 00:31:08.182 ************************************ 00:31:08.182 08:23:38 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:08.182 08:23:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:08.182 08:23:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:08.182 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:31:08.182 ************************************ 00:31:08.182 START TEST nvmf_target_disconnect_tc2 00:31:08.182 ************************************ 00:31:08.182 08:23:38 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:31:08.182 08:23:38 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:08.182 08:23:38 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:08.182 08:23:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:08.182 08:23:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:08.182 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:31:08.182 08:23:38 -- nvmf/common.sh@469 -- # nvmfpid=1259581 00:31:08.182 08:23:38 -- nvmf/common.sh@470 -- # waitforlisten 1259581 00:31:08.182 08:23:38 -- common/autotest_common.sh@819 -- # '[' -z 1259581 ']' 00:31:08.182 08:23:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:08.182 08:23:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.182 08:23:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:08.182 08:23:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.182 08:23:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:08.182 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:31:08.444 [2024-06-11 08:23:38.831166] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:08.444 [2024-06-11 08:23:38.831232] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.444 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.444 [2024-06-11 08:23:38.918567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.444 [2024-06-11 08:23:39.011856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:08.444 [2024-06-11 08:23:39.012015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.444 [2024-06-11 08:23:39.012025] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.444 [2024-06-11 08:23:39.012032] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:08.444 [2024-06-11 08:23:39.012540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:08.444 [2024-06-11 08:23:39.012669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:08.444 [2024-06-11 08:23:39.012831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:08.444 [2024-06-11 08:23:39.012831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:09.014 08:23:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:09.014 08:23:39 -- common/autotest_common.sh@852 -- # return 0 00:31:09.014 08:23:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:09.014 08:23:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:09.014 08:23:39 -- common/autotest_common.sh@10 -- # set +x 00:31:09.014 08:23:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.014 08:23:39 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:09.014 08:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.014 08:23:39 -- common/autotest_common.sh@10 -- # set +x 00:31:09.274 Malloc0 00:31:09.274 08:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.274 08:23:39 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:09.274 08:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.274 08:23:39 -- common/autotest_common.sh@10 -- # set +x 00:31:09.274 [2024-06-11 08:23:39.685682] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.274 08:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.275 08:23:39 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:09.275 08:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.275 08:23:39 -- common/autotest_common.sh@10 -- # set +x 00:31:09.275 08:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.275 08:23:39 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:09.275 08:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.275 08:23:39 -- common/autotest_common.sh@10 -- # set +x 00:31:09.275 08:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.275 08:23:39 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:09.275 08:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.275 08:23:39 -- common/autotest_common.sh@10 -- # set +x 00:31:09.275 [2024-06-11 08:23:39.726041] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.275 08:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.275 08:23:39 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:09.275 08:23:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.275 08:23:39 -- common/autotest_common.sh@10 -- # set +x 00:31:09.275 08:23:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.275 08:23:39 -- host/target_disconnect.sh@50 -- # reconnectpid=1259711 00:31:09.275 08:23:39 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:09.275 08:23:39 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.275 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.190 08:23:41 -- host/target_disconnect.sh@53 -- # kill -9 1259581 00:31:11.190 08:23:41 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:11.190 Read completed with error (sct=0, sc=8) 00:31:11.190 starting I/O failed 00:31:11.190 Read completed with error (sct=0, sc=8) 00:31:11.190 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Write completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 Read completed with error (sct=0, sc=8) 00:31:11.191 starting I/O failed 00:31:11.191 [2024-06-11 08:23:41.758476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:11.191 [2024-06-11 08:23:41.758866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.759229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.759242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with 
addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.759354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.759790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.759825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.760201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.760674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.760709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.761003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.761241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.761251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.761694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.762009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.762022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.762308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.762739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.762750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.762939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.763260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.763269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.763484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.763799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.763808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 
00:31:11.191 [2024-06-11 08:23:41.764141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.764443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.764453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.764776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.765070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.765079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.765417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.765783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.765793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.766071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.766365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.766375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.766680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.766961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.766970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.767265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.767490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.767500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.767841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.768170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.768180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 
00:31:11.191 [2024-06-11 08:23:41.768482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.768879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.768889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.769129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.769458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.769468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.769667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.769988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.769998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.770347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.770534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.770545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.770778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.771022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.771032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.191 qpair failed and we were unable to recover it. 00:31:11.191 [2024-06-11 08:23:41.771352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.191 [2024-06-11 08:23:41.771466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.771476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.771858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.772160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.772169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-06-11 08:23:41.772308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.772606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.772616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.772901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.773205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.773215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.773559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.773777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.773787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.774124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.774423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.774432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.774846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.775187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.775196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.775531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.775877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.775886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.776151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.776345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.776354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-06-11 08:23:41.776693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.776991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.777000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.777333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.777585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.777594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.777963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.778249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.778258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.778572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.778761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.778771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.779060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.779405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.779414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.779705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.779996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.780005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 00:31:11.192 [2024-06-11 08:23:41.780386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.780680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.192 [2024-06-11 08:23:41.780689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.192 qpair failed and we were unable to recover it. 
00:31:11.192 [2024-06-11 08:23:41.781001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.192 [2024-06-11 08:23:41.781291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.192 [2024-06-11 08:23:41.781302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:11.192 qpair failed and we were unable to recover it.
00:31:11.192 [2024-06-11 08:23:41.781623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.192 [2024-06-11 08:23:41.781922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.192 [2024-06-11 08:23:41.781932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:11.192 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every retry from 08:23:41.782 through 08:23:41.886, with the elapsed-time prefix advancing from 00:31:11.192 to 00:31:11.468 ...]
00:31:11.468 [2024-06-11 08:23:41.886130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.468 [2024-06-11 08:23:41.886471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.468 [2024-06-11 08:23:41.886497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:11.468 qpair failed and we were unable to recover it.
00:31:11.468 [2024-06-11 08:23:41.886846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.887165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.887191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.887524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.887897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.887923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.888175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.888511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.888538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.888890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.889233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.889259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.889597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.889957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.889983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.890239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.890591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.890618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.890977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.891321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.891347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 
00:31:11.468 [2024-06-11 08:23:41.891553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.891923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.891950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.892299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.892618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.892645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.893001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.893229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.893259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.893519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.893852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.893879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.894096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.894319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.894344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.894676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.895011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.895038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.895382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.895746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.895773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 
00:31:11.468 [2024-06-11 08:23:41.896118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.896478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.896506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.896862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.897181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.897207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.897540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.897893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.897919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.898274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.898613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.898639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.899012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.899237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.899263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.468 [2024-06-11 08:23:41.899599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.899901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.468 [2024-06-11 08:23:41.899926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.468 qpair failed and we were unable to recover it. 00:31:11.469 [2024-06-11 08:23:41.900258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.900588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.900615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 
00:31:11.469 [2024-06-11 08:23:41.900965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.901311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.901336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-06-11 08:23:41.901696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.902002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.902027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-06-11 08:23:41.902365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.902706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.902732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-06-11 08:23:41.903099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.903418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.469 [2024-06-11 08:23:41.903470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.469 qpair failed and we were unable to recover it. 00:31:11.469 [2024-06-11 08:23:41.903796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.904136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.904162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-06-11 08:23:41.904483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.904833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.904858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-06-11 08:23:41.905188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.905550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.905575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 
00:31:11.470 [2024-06-11 08:23:41.905893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.906200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.906226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.470 [2024-06-11 08:23:41.906479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.906825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.470 [2024-06-11 08:23:41.906851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.470 qpair failed and we were unable to recover it. 00:31:11.471 [2024-06-11 08:23:41.907226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.907557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.907584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-06-11 08:23:41.907943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.908275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.908301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-06-11 08:23:41.908528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.908873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.908899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-06-11 08:23:41.909164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.909491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.909518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-06-11 08:23:41.909841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.910162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.910187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 
00:31:11.471 [2024-06-11 08:23:41.910556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.910926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.910952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.471 [2024-06-11 08:23:41.911293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.911495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.471 [2024-06-11 08:23:41.911525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.471 qpair failed and we were unable to recover it. 00:31:11.472 [2024-06-11 08:23:41.911877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.912220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.912245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-06-11 08:23:41.912558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.912887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.912913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-06-11 08:23:41.913291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.913579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.913607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-06-11 08:23:41.913933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.914258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.914285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 00:31:11.472 [2024-06-11 08:23:41.914639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.914866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.472 [2024-06-11 08:23:41.914895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.472 qpair failed and we were unable to recover it. 
00:31:11.472 [2024-06-11 08:23:41.915284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.915642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.915668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-06-11 08:23:41.916029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.916384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.916410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-06-11 08:23:41.916757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.917110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.917136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-06-11 08:23:41.917484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.917737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.917762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-06-11 08:23:41.918130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.918450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.918476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.473 [2024-06-11 08:23:41.918857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.919212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.473 [2024-06-11 08:23:41.919238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.473 qpair failed and we were unable to recover it. 00:31:11.474 [2024-06-11 08:23:41.919593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.919941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.919967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 
00:31:11.474 [2024-06-11 08:23:41.920302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.920630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.920657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-06-11 08:23:41.920902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.921126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.921152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-06-11 08:23:41.921486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.921670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.921696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.474 qpair failed and we were unable to recover it. 00:31:11.474 [2024-06-11 08:23:41.922102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.474 [2024-06-11 08:23:41.922413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.476 [2024-06-11 08:23:41.922446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.476 qpair failed and we were unable to recover it. 00:31:11.476 [2024-06-11 08:23:41.922796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.923130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.923161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.477 qpair failed and we were unable to recover it. 00:31:11.477 [2024-06-11 08:23:41.923326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.923576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.923605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.477 qpair failed and we were unable to recover it. 00:31:11.477 [2024-06-11 08:23:41.923952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.924194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.924219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.477 qpair failed and we were unable to recover it. 
00:31:11.477 [2024-06-11 08:23:41.924552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.924883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.924909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.477 qpair failed and we were unable to recover it. 00:31:11.477 [2024-06-11 08:23:41.925084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.925457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.925484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.477 qpair failed and we were unable to recover it. 00:31:11.477 [2024-06-11 08:23:41.925814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.926048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.477 [2024-06-11 08:23:41.926073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.477 qpair failed and we were unable to recover it. 00:31:11.477 [2024-06-11 08:23:41.926454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.926770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.926795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.478 qpair failed and we were unable to recover it. 00:31:11.478 [2024-06-11 08:23:41.927150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.927462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.927490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.478 qpair failed and we were unable to recover it. 00:31:11.478 [2024-06-11 08:23:41.927832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.928195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.928221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.478 qpair failed and we were unable to recover it. 00:31:11.478 [2024-06-11 08:23:41.928505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.928870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.928896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.478 qpair failed and we were unable to recover it. 
00:31:11.478 [2024-06-11 08:23:41.929204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.929424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.929463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.478 qpair failed and we were unable to recover it. 00:31:11.478 [2024-06-11 08:23:41.929775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.930101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.930127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.478 qpair failed and we were unable to recover it. 00:31:11.478 [2024-06-11 08:23:41.930310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.478 [2024-06-11 08:23:41.930653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.930681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.479 qpair failed and we were unable to recover it. 00:31:11.479 [2024-06-11 08:23:41.931022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.931372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.931397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.479 qpair failed and we were unable to recover it. 00:31:11.479 [2024-06-11 08:23:41.931753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.932067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.932092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.479 qpair failed and we were unable to recover it. 00:31:11.479 [2024-06-11 08:23:41.932464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.932696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.932725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.479 qpair failed and we were unable to recover it. 00:31:11.479 [2024-06-11 08:23:41.933095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.933436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.933486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.479 qpair failed and we were unable to recover it. 
00:31:11.479 [2024-06-11 08:23:41.933879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.934238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.479 [2024-06-11 08:23:41.934264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.479 qpair failed and we were unable to recover it. 00:31:11.479 [2024-06-11 08:23:41.934618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.934951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.934976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.480 qpair failed and we were unable to recover it. 00:31:11.480 [2024-06-11 08:23:41.935299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.935653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.935680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.480 qpair failed and we were unable to recover it. 00:31:11.480 [2024-06-11 08:23:41.936001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.936234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.936268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.480 qpair failed and we were unable to recover it. 00:31:11.480 [2024-06-11 08:23:41.936625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.936971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.936997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.480 qpair failed and we were unable to recover it. 00:31:11.480 [2024-06-11 08:23:41.937404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.937752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.937779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.480 qpair failed and we were unable to recover it. 00:31:11.480 [2024-06-11 08:23:41.938192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.938507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.938534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.480 qpair failed and we were unable to recover it. 
00:31:11.480 [2024-06-11 08:23:41.938891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.939209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.939235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.480 qpair failed and we were unable to recover it. 00:31:11.480 [2024-06-11 08:23:41.939636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.480 [2024-06-11 08:23:41.939991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.940016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.481 qpair failed and we were unable to recover it. 00:31:11.481 [2024-06-11 08:23:41.940352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.940579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.940606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.481 qpair failed and we were unable to recover it. 00:31:11.481 [2024-06-11 08:23:41.940876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.941242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.941268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.481 qpair failed and we were unable to recover it. 00:31:11.481 [2024-06-11 08:23:41.941598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.941954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.941979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.481 qpair failed and we were unable to recover it. 00:31:11.481 [2024-06-11 08:23:41.942333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.942633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.942658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.481 qpair failed and we were unable to recover it. 00:31:11.481 [2024-06-11 08:23:41.943019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.943369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.943399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.481 qpair failed and we were unable to recover it. 
00:31:11.481 [2024-06-11 08:23:41.943756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.481 [2024-06-11 08:23:41.944072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.944098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 00:31:11.482 [2024-06-11 08:23:41.944466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.944738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.944764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 00:31:11.482 [2024-06-11 08:23:41.945079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.945407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.945432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 00:31:11.482 [2024-06-11 08:23:41.945788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.946104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.946129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 00:31:11.482 [2024-06-11 08:23:41.946532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.946851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.946877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 00:31:11.482 [2024-06-11 08:23:41.947197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.947529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.947556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 00:31:11.482 [2024-06-11 08:23:41.947901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.948189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.948214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 
00:31:11.482 [2024-06-11 08:23:41.948455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.948776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.948802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.482 qpair failed and we were unable to recover it. 00:31:11.482 [2024-06-11 08:23:41.949148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.482 [2024-06-11 08:23:41.949502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.949530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.483 qpair failed and we were unable to recover it. 00:31:11.483 [2024-06-11 08:23:41.949861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.950207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.950233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.483 qpair failed and we were unable to recover it. 00:31:11.483 [2024-06-11 08:23:41.950549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.950912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.950937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.483 qpair failed and we were unable to recover it. 00:31:11.483 [2024-06-11 08:23:41.951267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.951612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.951638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.483 qpair failed and we were unable to recover it. 00:31:11.483 [2024-06-11 08:23:41.951992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.952337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.952362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.483 qpair failed and we were unable to recover it. 00:31:11.483 [2024-06-11 08:23:41.952716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.952932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.483 [2024-06-11 08:23:41.952962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.483 qpair failed and we were unable to recover it. 
00:31:11.483 [2024-06-11 08:23:41.953319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.483 [2024-06-11 08:23:41.953680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.483 [2024-06-11 08:23:41.953708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:11.483 qpair failed and we were unable to recover it.
[... identical retry output omitted: the same pair of posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." repeats for every connection attempt from 08:23:41.954 through 08:23:42.060 ...]
00:31:11.509 [2024-06-11 08:23:42.060945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.509 [2024-06-11 08:23:42.061279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.509 [2024-06-11 08:23:42.061306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:11.509 qpair failed and we were unable to recover it.
00:31:11.510 [2024-06-11 08:23:42.061669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.061993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.062019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.510 qpair failed and we were unable to recover it. 00:31:11.510 [2024-06-11 08:23:42.062384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.062757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.062785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.510 qpair failed and we were unable to recover it. 00:31:11.510 [2024-06-11 08:23:42.063127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.063488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.063515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.510 qpair failed and we were unable to recover it. 00:31:11.510 [2024-06-11 08:23:42.063747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.064103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.064130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.510 qpair failed and we were unable to recover it. 00:31:11.510 [2024-06-11 08:23:42.064476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.064823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.064849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.510 qpair failed and we were unable to recover it. 00:31:11.510 [2024-06-11 08:23:42.065239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.065523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.510 [2024-06-11 08:23:42.065549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.511 qpair failed and we were unable to recover it. 00:31:11.511 [2024-06-11 08:23:42.065880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.066208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.066235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.511 qpair failed and we were unable to recover it. 
00:31:11.511 [2024-06-11 08:23:42.066582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.066796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.066824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.511 qpair failed and we were unable to recover it. 00:31:11.511 [2024-06-11 08:23:42.067187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.067519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.067546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.511 qpair failed and we were unable to recover it. 00:31:11.511 [2024-06-11 08:23:42.067913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.068256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.068283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.511 qpair failed and we were unable to recover it. 00:31:11.511 [2024-06-11 08:23:42.068663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.069007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.069034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.511 qpair failed and we were unable to recover it. 00:31:11.511 [2024-06-11 08:23:42.069396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.511 [2024-06-11 08:23:42.069776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.069803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 00:31:11.512 [2024-06-11 08:23:42.070130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.070477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.070504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 00:31:11.512 [2024-06-11 08:23:42.070935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.071249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.071275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 
00:31:11.512 [2024-06-11 08:23:42.071612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.071975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.072001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 00:31:11.512 [2024-06-11 08:23:42.072335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.072687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.072715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 00:31:11.512 [2024-06-11 08:23:42.073067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.073395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.073421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 00:31:11.512 [2024-06-11 08:23:42.073667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.073976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.074003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 00:31:11.512 [2024-06-11 08:23:42.074358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.074695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.074722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.512 qpair failed and we were unable to recover it. 00:31:11.512 [2024-06-11 08:23:42.075065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.512 [2024-06-11 08:23:42.075391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.075417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.513 qpair failed and we were unable to recover it. 00:31:11.513 [2024-06-11 08:23:42.075643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.075982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.076009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.513 qpair failed and we were unable to recover it. 
00:31:11.513 [2024-06-11 08:23:42.076361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.076706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.076734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.513 qpair failed and we were unable to recover it. 00:31:11.513 [2024-06-11 08:23:42.077086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.077416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.077450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.513 qpair failed and we were unable to recover it. 00:31:11.513 [2024-06-11 08:23:42.077817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.078163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.078191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.513 qpair failed and we were unable to recover it. 00:31:11.513 [2024-06-11 08:23:42.078513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.078856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.078883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.513 qpair failed and we were unable to recover it. 00:31:11.513 [2024-06-11 08:23:42.079240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.079568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.513 [2024-06-11 08:23:42.079595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.513 qpair failed and we were unable to recover it. 00:31:11.514 [2024-06-11 08:23:42.079869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.080243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.080269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.514 qpair failed and we were unable to recover it. 00:31:11.514 [2024-06-11 08:23:42.080604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.080960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.080987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.514 qpair failed and we were unable to recover it. 
00:31:11.514 [2024-06-11 08:23:42.081341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.081681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.081707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.514 qpair failed and we were unable to recover it. 00:31:11.514 [2024-06-11 08:23:42.082016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.082387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.082412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.514 qpair failed and we were unable to recover it. 00:31:11.514 [2024-06-11 08:23:42.082806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.083147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.083173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.514 qpair failed and we were unable to recover it. 00:31:11.514 [2024-06-11 08:23:42.083539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.083866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.514 [2024-06-11 08:23:42.083892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.514 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.084261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.084485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.084516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.084772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.085126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.085154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.085487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.085832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.085859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 
00:31:11.515 [2024-06-11 08:23:42.086216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.086560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.086589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.086948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.087296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.087323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.087589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.087926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.087952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.088270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.088613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.088641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.089004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.089333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.515 [2024-06-11 08:23:42.089359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.515 qpair failed and we were unable to recover it. 00:31:11.515 [2024-06-11 08:23:42.089700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.090029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.090054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.516 qpair failed and we were unable to recover it. 00:31:11.516 [2024-06-11 08:23:42.090282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.090532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.090560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.516 qpair failed and we were unable to recover it. 
00:31:11.516 [2024-06-11 08:23:42.090914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.091263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.091289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.516 qpair failed and we were unable to recover it. 00:31:11.516 [2024-06-11 08:23:42.091627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.091997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.092024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.516 qpair failed and we were unable to recover it. 00:31:11.516 [2024-06-11 08:23:42.092347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.092674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.092701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.516 qpair failed and we were unable to recover it. 00:31:11.516 [2024-06-11 08:23:42.093045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.093401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.516 [2024-06-11 08:23:42.093427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.516 qpair failed and we were unable to recover it. 00:31:11.516 [2024-06-11 08:23:42.093799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.094149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.094175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.517 qpair failed and we were unable to recover it. 00:31:11.517 [2024-06-11 08:23:42.094504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.094866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.094893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.517 qpair failed and we were unable to recover it. 00:31:11.517 [2024-06-11 08:23:42.095252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.095586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.095613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.517 qpair failed and we were unable to recover it. 
00:31:11.517 [2024-06-11 08:23:42.095947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.096304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.096330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.517 qpair failed and we were unable to recover it. 00:31:11.517 [2024-06-11 08:23:42.096707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.097037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.097062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.517 qpair failed and we were unable to recover it. 00:31:11.517 [2024-06-11 08:23:42.097399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.517 [2024-06-11 08:23:42.097748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.518 [2024-06-11 08:23:42.097777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.518 qpair failed and we were unable to recover it. 00:31:11.518 [2024-06-11 08:23:42.098123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.518 [2024-06-11 08:23:42.098470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.518 [2024-06-11 08:23:42.098496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.518 qpair failed and we were unable to recover it. 00:31:11.518 [2024-06-11 08:23:42.098846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.518 [2024-06-11 08:23:42.099214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.518 [2024-06-11 08:23:42.099240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.518 qpair failed and we were unable to recover it. 00:31:11.518 [2024-06-11 08:23:42.099617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.518 [2024-06-11 08:23:42.099836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.518 [2024-06-11 08:23:42.099862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.520 qpair failed and we were unable to recover it. 00:31:11.520 [2024-06-11 08:23:42.100229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-11 08:23:42.100631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-11 08:23:42.100658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.520 qpair failed and we were unable to recover it. 
00:31:11.520 [2024-06-11 08:23:42.101012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-11 08:23:42.101361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-11 08:23:42.101387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.520 qpair failed and we were unable to recover it. 00:31:11.520 [2024-06-11 08:23:42.101757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-11 08:23:42.102129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.520 [2024-06-11 08:23:42.102155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.520 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.102488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.102802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.102829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.103170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.103391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.103420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.103766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.104102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.104129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.104461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.104820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.104846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.105191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.105550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.105579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 
00:31:11.787 [2024-06-11 08:23:42.105941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.106268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.106294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.106661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.106880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.106908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.107332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.107635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.107663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.108011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.108342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.108367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.108701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.109063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.109089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.109396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.109754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.109781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.110097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.110433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.110467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 
00:31:11.787 [2024-06-11 08:23:42.110803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.110946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.110972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.111355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.111692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.787 [2024-06-11 08:23:42.111719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.787 qpair failed and we were unable to recover it. 00:31:11.787 [2024-06-11 08:23:42.112060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.112428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.112466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.112810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.113009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.113038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.113386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.113720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.113748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.114104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.114466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.114494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.114844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.115168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.115194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 
00:31:11.788 [2024-06-11 08:23:42.115550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.115878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.115904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.116157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.116480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.116506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.116853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.117201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.117228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.117547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.117905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.117932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.118289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.118605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.118633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.118992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.119318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.119343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.119688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.120048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.120075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 
00:31:11.788 [2024-06-11 08:23:42.120425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.120761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.120788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.121143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.121375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.121400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.121774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.122101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.122127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.122529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.122888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.122914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.123037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.123350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.123376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.123727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.124159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.124185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.124515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.124848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.124874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 
00:31:11.788 [2024-06-11 08:23:42.125200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.125567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.125594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.125948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.126267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.126293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.126631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.126951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.126977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.127330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.127697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.127726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.127960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.128304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.128331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.128709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.129066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.129092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.129520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.129851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.129876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 
00:31:11.788 [2024-06-11 08:23:42.130247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.130597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.130625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.130977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.131290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.131316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.788 [2024-06-11 08:23:42.131635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.131937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.788 [2024-06-11 08:23:42.131962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.788 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.132360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.132678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.132705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.133059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.133412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.133447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.133812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.134140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.134167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.134548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.134894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.134920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 
00:31:11.789 [2024-06-11 08:23:42.135168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.135484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.135511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.135860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.136070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.136098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.136360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.136708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.136735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.137103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.137435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.137470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.137887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.138290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.138316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.138657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.139021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.139048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.140747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.141113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.141144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 
00:31:11.789 [2024-06-11 08:23:42.141466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.141777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.141803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.142099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.142363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.142390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.142753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.143089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.143116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.143490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.143858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.143884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.144251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.144577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.144604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.144948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.145307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.145333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.145715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.146061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.146087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 
00:31:11.789 [2024-06-11 08:23:42.146318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.146689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.146718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.147061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.147418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.147455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.147797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.148141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.148168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.148415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.148749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.148776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.149094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.149211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.149244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.149559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.149880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.149908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.150214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.150557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.150584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 
00:31:11.789 [2024-06-11 08:23:42.150936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.151289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.151315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.151647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.152012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.152038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.152398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.152745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.152772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.789 qpair failed and we were unable to recover it. 00:31:11.789 [2024-06-11 08:23:42.153130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.153478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.789 [2024-06-11 08:23:42.153507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.153861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.154196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.154222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.154549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.154894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.154920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.155126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.155466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.155494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 
00:31:11.790 [2024-06-11 08:23:42.155828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.156154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.156186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.156541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.156900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.156926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.157213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.157568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.157595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.157929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.158249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.158275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.158626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.158988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.159014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.159349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.159658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.159685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.160053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.160260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.160285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 
00:31:11.790 [2024-06-11 08:23:42.160616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.160939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.160965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.161303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.161614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.161642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.162017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.162310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.162335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.162734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.163045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.163077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.163454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.163797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.163823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.164172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.164512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.164540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.164914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.165237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.165264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 
00:31:11.790 [2024-06-11 08:23:42.165655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.166013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.166040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.166401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.166623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.166650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.166880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.167235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.167262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.167638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.168004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.168029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.168374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.168610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.168636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.169000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.169333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.169360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.169692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.170039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.170066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 
00:31:11.790 [2024-06-11 08:23:42.170315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.170542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.170572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.170926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.171275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.171301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.171641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.171955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.171981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.172322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.172662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.172689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.790 qpair failed and we were unable to recover it. 00:31:11.790 [2024-06-11 08:23:42.172897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.790 [2024-06-11 08:23:42.173242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.173268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.173615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.173962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.173989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.174363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.174691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.174718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-06-11 08:23:42.175061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.175280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.175310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.175529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.175899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.175926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.176172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.176519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.176546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.176880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.177222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.177248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.177511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.177882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.177909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.178259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.178609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.178636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.179005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.179236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.179265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-06-11 08:23:42.179620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.179967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.179993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.180240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.180551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.180578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.180940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.181246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.181272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.181484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.181845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.181871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.182198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.182567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.182594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.182925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.183249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.183275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.183635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.183965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.183992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-06-11 08:23:42.184370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.184773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.184801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.185160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.185432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.185478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.185708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.185843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.185871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.186269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.186597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.186625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.186951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.187169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.187197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.187526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.187877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.187903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.188267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.188589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.188615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 
00:31:11.791 [2024-06-11 08:23:42.188934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.189282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.189308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.189712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.190067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.190092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.190466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.190856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.190883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.791 qpair failed and we were unable to recover it. 00:31:11.791 [2024-06-11 08:23:42.191307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.791 [2024-06-11 08:23:42.191621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.191648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.192019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.192244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.192269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.192619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.192960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.192986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.193352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.193706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.193736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 
00:31:11.792 [2024-06-11 08:23:42.194130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.194452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.194484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.194838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.195144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.195170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.195503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.195848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.195874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.196251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.196605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.196631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.196995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.197342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.197368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.197624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.198003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.198030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.198379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.198708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.198734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 
00:31:11.792 [2024-06-11 08:23:42.199140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.199473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.199500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.199837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.200063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.200089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.200513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.200863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.200889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.201274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.201628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.201654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.201981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.202234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.202262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.202599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.202947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.202972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.203299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.203599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.203625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 
00:31:11.792 [2024-06-11 08:23:42.203964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.204319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.204344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.204745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.204976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.205001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.205232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.205527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.205561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.205878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.206215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.206241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.206490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.206813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.206839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.792 qpair failed and we were unable to recover it. 00:31:11.792 [2024-06-11 08:23:42.207221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.792 [2024-06-11 08:23:42.207543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.207571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.207899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.208245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.208271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 
00:31:11.793 [2024-06-11 08:23:42.208531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.208881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.208907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.209234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.209633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.209661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.210009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.210237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.210265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.210642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.210992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.211019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.211383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.211694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.211722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.211975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.212203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.212229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.212533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.212893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.212918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 
00:31:11.793 [2024-06-11 08:23:42.213266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.213511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.213537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.213894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.214231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.214256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.214491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.214722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.214747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.214971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.215289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.215314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.215742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.216062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.216087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.216386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.216612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.216639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.216967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.217234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.217260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 
00:31:11.793 [2024-06-11 08:23:42.217577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.217901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.217927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.218250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.218598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.218625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.218860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.219096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.219122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.219436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.219703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.219729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.219958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.220278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.220305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.220554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.220793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.220820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.221178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.221486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.221514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 
00:31:11.793 [2024-06-11 08:23:42.221829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.222181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.222208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.222468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.222800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.222827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.223050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.223282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.223308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.223588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.223835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.223861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.224207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.224504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.224531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.224868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.225112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.225137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.793 qpair failed and we were unable to recover it. 00:31:11.793 [2024-06-11 08:23:42.225414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.793 [2024-06-11 08:23:42.225820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.225847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 
00:31:11.794 [2024-06-11 08:23:42.226192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.226526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.226553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.226885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.227213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.227239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.227501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.227754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.227782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.228174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.228580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.228608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.228980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.229325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.229351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.229742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.229986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.230015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.230378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.230728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.230756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 
00:31:11.794 [2024-06-11 08:23:42.231095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.231436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.231470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.231915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.232265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.232291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.232663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.232993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.233020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.233374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.233733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.233762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.234026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.234346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.234372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.234714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.235041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.235067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.235420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.235775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.235802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 
00:31:11.794 [2024-06-11 08:23:42.236148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.236403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.236429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.236806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.237031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.237059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.237249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.237578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.237606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.237938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.238280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.238306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.238550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.238873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.238899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.239239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.239591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.239618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.240002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.240355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.240381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 
00:31:11.794 [2024-06-11 08:23:42.240755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.241103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.241129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.241537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.241884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.241909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.242261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.242621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.242647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.243002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.243350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.243377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.243684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.243936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.243961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.244206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.244553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.244580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 00:31:11.794 [2024-06-11 08:23:42.244923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.245251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.245277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.794 qpair failed and we were unable to recover it. 
00:31:11.794 [2024-06-11 08:23:42.245626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.794 [2024-06-11 08:23:42.245968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.245993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.246245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.246607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.246635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.246986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.247340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.247366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.247633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.247958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.247984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.248111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.248451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.248478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.248856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.249182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.249208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.249458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.249587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.249614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 
00:31:11.795 [2024-06-11 08:23:42.249956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.250284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.250310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.250728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.250923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.250957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.251333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.251744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.251771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.252000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.252386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.252412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.252680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.253026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.253053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.253400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.253758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.253786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.254034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.254399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.254425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 
00:31:11.795 [2024-06-11 08:23:42.254886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.255242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.255268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.255542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.255898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.255923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.256269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.256613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.256647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.257008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.257365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.257391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.257623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.257832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.257867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.258201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.258526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.258553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.258912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.259146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.259174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 
00:31:11.795 [2024-06-11 08:23:42.259646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.259968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.259993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.260399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.260756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.260783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.261145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.261501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.261528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.261869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.262214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.262240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.262584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.262927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.262953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.263305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.263640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.263668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.264025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.264357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.264382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 
00:31:11.795 [2024-06-11 08:23:42.264615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.264958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.264990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.795 qpair failed and we were unable to recover it. 00:31:11.795 [2024-06-11 08:23:42.265310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.795 [2024-06-11 08:23:42.265537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.265563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.265904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.266241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.266267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.266596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.266905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.266930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.267262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.267593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.267619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.267962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.268314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.268339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.268677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.268894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.268922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 
00:31:11.796 [2024-06-11 08:23:42.269260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.269598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.269626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.269999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.270362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.270388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.270734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.271063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.271090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.271436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.271812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.271846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.272064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.272409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.272435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.272807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.273058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.273084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.273302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.273646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.273680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 
00:31:11.796 [2024-06-11 08:23:42.274049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.274369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.274394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.274675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.275041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.275066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.275406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.275802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.275828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.276208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.276568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.276594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.796 qpair failed and we were unable to recover it. 00:31:11.796 [2024-06-11 08:23:42.277001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.277220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.796 [2024-06-11 08:23:42.277247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.277534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.277783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.277809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.278174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.278538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.278565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 
00:31:11.797 [2024-06-11 08:23:42.278845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.279177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.279203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.279611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.279973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.279998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.280359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.280564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.280590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.280933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.281275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.281301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.281679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.281923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.281948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.282327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.282621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.282647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.282989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.283357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.283383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 
00:31:11.797 [2024-06-11 08:23:42.283691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.284050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.284076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.284405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.284797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.284825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.285174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.285545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.285573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.285804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.286158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.286184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.286354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.286827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.286854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.287069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.287418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.287451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.287811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.288201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.288226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 
00:31:11.797 [2024-06-11 08:23:42.288582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.288912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.288938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.289319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.289674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.289702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.290079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.290404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.290429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.290780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.291156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.291181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.291474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.291713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.291740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.292146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.292381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.797 [2024-06-11 08:23:42.292409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.797 qpair failed and we were unable to recover it. 00:31:11.797 [2024-06-11 08:23:42.292763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.292997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.293022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 
00:31:11.798 [2024-06-11 08:23:42.293395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.293749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.293776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.294138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.294458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.294485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.294759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.294992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.295017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.295346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.295555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.295581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.295940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.296252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.296278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.296640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.296956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.296982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.297320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.297569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.297596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 
00:31:11.798 [2024-06-11 08:23:42.297927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.298271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.298296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.298672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.299021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.299047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.299429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.299835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.299861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.300236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.300578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.300605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.300937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.301258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.301285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.301551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.301925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.301950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 00:31:11.798 [2024-06-11 08:23:42.302315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.302566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.798 [2024-06-11 08:23:42.302597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.798 qpair failed and we were unable to recover it. 
00:31:11.798 [2024-06-11 08:23:42.302951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.798 [2024-06-11 08:23:42.303263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.798 [2024-06-11 08:23:42.303289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:11.798 qpair failed and we were unable to recover it.
00:31:11.798 [2024-06-11 08:23:42.303676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.798 [2024-06-11 08:23:42.303916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:11.798 [2024-06-11 08:23:42.303941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:11.798 qpair failed and we were unable to recover it.
00:31:11.798 [... the same sequence (posix_sock_create connect() failures with errno = 111, nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 2024-06-11 08:23:42.304324 through 08:23:42.414397; console timestamps 00:31:11.798-00:31:11.804 ...]
00:31:11.804 [2024-06-11 08:23:42.414844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.415089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.415116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.415481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.415803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.415830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.416215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.416560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.416588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.416957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.417301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.417327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.417685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.418041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.418068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.418426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.418776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.418802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.419185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.419537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.419565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 
00:31:11.804 [2024-06-11 08:23:42.419921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.420290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.420316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.420669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.421043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.421070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.421256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.421474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.421505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.421839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.424042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.424103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.424483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.424869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.424897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.425234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.425597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.425624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.425964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.426298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.426323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 
00:31:11.804 [2024-06-11 08:23:42.426708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.427104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.427131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:11.804 [2024-06-11 08:23:42.427486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.427729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:11.804 [2024-06-11 08:23:42.427757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:11.804 qpair failed and we were unable to recover it. 00:31:12.073 [2024-06-11 08:23:42.428136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-06-11 08:23:42.428478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-06-11 08:23:42.428506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-06-11 08:23:42.428877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-06-11 08:23:42.429218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-06-11 08:23:42.429245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-06-11 08:23:42.430991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-06-11 08:23:42.431381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-06-11 08:23:42.431412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.073 qpair failed and we were unable to recover it. 00:31:12.073 [2024-06-11 08:23:42.431674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.073 [2024-06-11 08:23:42.432038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.432065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.432401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.432756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.432784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-06-11 08:23:42.433143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.433476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.433503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.433858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.434092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.434118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.434455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.434818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.434844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.435265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.435577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.435607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.435859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.436205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.436232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.436500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.436886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.436912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.437270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.437623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.437650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-06-11 08:23:42.438021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.438368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.438396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.438770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.439000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.439030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.439458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.439783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.439810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.440169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.440517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.440551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.440898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.441244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.441270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.441616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.441985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.442011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.442244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.442646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.442673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-06-11 08:23:42.443023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.443380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.443407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.443787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.444117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.444143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.444565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.444915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.444942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.445305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.445666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.445693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.446041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.446287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.446313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.446704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.447046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.447073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.447434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.447792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.447819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 
00:31:12.074 [2024-06-11 08:23:42.448173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.448464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.448490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.448847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.449164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.449190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.449549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.449828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.449855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.450267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.450665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.450691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.451043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.451406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.451432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.074 qpair failed and we were unable to recover it. 00:31:12.074 [2024-06-11 08:23:42.451828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.074 [2024-06-11 08:23:42.452176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.452204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.452555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.452913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.452940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-06-11 08:23:42.453296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.453614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.453642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.454015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.454368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.454395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.454729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.455075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.455102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.455476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.455866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.455893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.456140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.456513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.456541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.456864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.457116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.457142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.457484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.457812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.457838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-06-11 08:23:42.458207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.458527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.458555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.458798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.459118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.459145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.459515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.459882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.459907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.460273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.460492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.460521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.460897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.461228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.461256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.461615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.461949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.461976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.462344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.462707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.462735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-06-11 08:23:42.463097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.463456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.463486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.463847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.464291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.464318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.464701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.465060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.465087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.465411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.465731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.465758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.466136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.466461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.466488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.466842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.467183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.467210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.467458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.467820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.467847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 
00:31:12.075 [2024-06-11 08:23:42.468209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.468565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.468592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.468847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.469212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.469238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.469581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.469939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.469965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.470416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.470782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.470810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.471034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.471391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.471417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.075 qpair failed and we were unable to recover it. 00:31:12.075 [2024-06-11 08:23:42.471777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.472128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.075 [2024-06-11 08:23:42.472154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.472563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.472925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.472952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-06-11 08:23:42.473310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.473638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.473666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.473918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.474280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.474308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.474596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.474958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.474985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.475267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.475613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.475641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.476016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.476372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.476399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.476773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.477175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.477202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.477537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.477772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.477800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-06-11 08:23:42.478166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.478507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.478533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.478903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.479271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.479298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.479640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.480022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.480048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.480316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.480696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.480722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.481062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.481462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.481490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.481850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.482184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.482210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.482562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.482984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.483011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-06-11 08:23:42.483339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.483669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.483698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.484075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.484399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.484448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.484815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.485140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.485166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.485536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.485917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.485943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.486220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.486543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.486571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.486946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.487284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.487311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.487663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.488003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.488029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 
00:31:12.076 [2024-06-11 08:23:42.488389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.488732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.488761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.489105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.489470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.489497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.489846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.490194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.490220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.490588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.490846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.490872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.491285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.491626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.491660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.491965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.492226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.076 [2024-06-11 08:23:42.492253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.076 qpair failed and we were unable to recover it. 00:31:12.076 [2024-06-11 08:23:42.492618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.492956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.492983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 
00:31:12.077 [2024-06-11 08:23:42.493335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.493675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.493701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.494059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.494418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.494454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.494786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.495180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.495206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.495582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.495835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.495865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.496235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.496571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.496598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.496865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.497113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.497141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.497363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.497701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.497728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 
00:31:12.077 [2024-06-11 08:23:42.498082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.498459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.498491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.498848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.499191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.499216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.499617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.499963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.499990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.500354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.500719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.500747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.501114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.501460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.501489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.501864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.502223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.502249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.502473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.502748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.502775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 
00:31:12.077 [2024-06-11 08:23:42.503140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.503531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.503559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.503942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.504275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.504302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.504682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.505007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.505033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.505390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.505747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.505774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.506128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.506490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.506518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.506879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.507203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.507229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.507576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.507955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.507981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 
00:31:12.077 [2024-06-11 08:23:42.508330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.508681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.508710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.509068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.509417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.509451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.509856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.510198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.510231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.510462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.510804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.510832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.511207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.511564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.511592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.511952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.512320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.512348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.077 qpair failed and we were unable to recover it. 00:31:12.077 [2024-06-11 08:23:42.512692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.077 [2024-06-11 08:23:42.512950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.512976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.078 [2024-06-11 08:23:42.513327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.513726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.513753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.514077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.514435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.514471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.514827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.515107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.515134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.515505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.517254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.517308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.517613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.517950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.517977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.518349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.518578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.518608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.518977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.519394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.519420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.078 [2024-06-11 08:23:42.519791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.520149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.520177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.520544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.520879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.520904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.521297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.521621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.521648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.522013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.522373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.522399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.522761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.523116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.523143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.523509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.523835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.523861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.524230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.524472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.524500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 
00:31:12.078 [2024-06-11 08:23:42.524874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.525224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.525252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.525633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.525864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.525892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.526123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.526430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.526464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.526841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.527185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.527211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.078 qpair failed and we were unable to recover it. 00:31:12.078 [2024-06-11 08:23:42.527573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.527944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.078 [2024-06-11 08:23:42.527971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.528219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.528549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.528580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.528939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.529312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.529341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-06-11 08:23:42.529690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.530044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.530070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.530453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.530747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.530773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.531107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.531425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.531459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.531592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.531987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.532014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.532298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.532756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.532784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.533010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.533339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.533366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.533766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.534128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.534153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-06-11 08:23:42.534526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.534891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.534917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.535193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.535555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.535582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.535810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.536170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.536197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.536650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.537016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.537043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.537417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.537871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.537898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.538257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.538536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.538564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.538947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.539186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.539211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-06-11 08:23:42.539587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.539944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.539970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.540327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.540591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.540618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.540864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.541023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.541048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.541397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.541722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.541749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.542126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.542473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.542500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.542886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.543124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.543150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.543525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.543893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.543919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 
00:31:12.079 [2024-06-11 08:23:42.544285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.544635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.544662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.544929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.545175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.545200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.545478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.545845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.545873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.546222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.546549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.546575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.546944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.547272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.079 [2024-06-11 08:23:42.547298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.079 qpair failed and we were unable to recover it. 00:31:12.079 [2024-06-11 08:23:42.547714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.548068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.548095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.548461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.548886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.548912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-06-11 08:23:42.549183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.549424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.549464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.549803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.550147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.550173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.550455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.550835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.550861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.551201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.551447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.551474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.551829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.552170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.552196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.552491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.552873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.552899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.553255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.553515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.553542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-06-11 08:23:42.553941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.554306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.554332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.554691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.554899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.554927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.555282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.555633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.555660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.555847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.556156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.556184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.556525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.556785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.556814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.556960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.557323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.557349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.557633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.557996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.558021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-06-11 08:23:42.558409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.558671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.558699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.559088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.559246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.559272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.559662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.559988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.560015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.560361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.560622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.560652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.561002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.561284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.561310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.561727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.562071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.562098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.562471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.562727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.562752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 
00:31:12.080 [2024-06-11 08:23:42.563135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.563379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.563408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.563768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.563982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.564008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.564351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.564545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.564573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.564859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.565214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.565240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.565491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.565723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.565753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.080 [2024-06-11 08:23:42.566015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.566352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.080 [2024-06-11 08:23:42.566379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.080 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.566759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.567011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.567038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [2024-06-11 08:23:42.567382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.567647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.567674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.568030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.568371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.568397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.568675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.568906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.568935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.569312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.569663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.569690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.570053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.570396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.570422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.570699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.571060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.571086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.571479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.571756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.571782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [2024-06-11 08:23:42.572158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.572555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.572583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.572968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.573334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.573361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.573661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.574063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.574090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.574523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.574941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.574968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.575411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.575771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.575798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.576053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.576398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.576425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 00:31:12.081 [2024-06-11 08:23:42.576809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.577133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.081 [2024-06-11 08:23:42.577161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [2024-06-11 08:23:42.577413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:12.081 [2024-06-11 08:23:42.577767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:12.081 [2024-06-11 08:23:42.577794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 
00:31:12.081 qpair failed and we were unable to recover it. 
00:31:12.081 [... the identical failure sequence -- two posix.c:1032:posix_sock_create connect() failures with errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- continues for every retried connection attempt, with SPDK timestamps running from 08:23:42.578 through 08:23:42.689 and elapsed time 00:31:12.081 through 00:31:12.087 ...]
00:31:12.088 [2024-06-11 08:23:42.689623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.689987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.690013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.690246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.690575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.690602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.690967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.691327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.691354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.691656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.692015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.692042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.692421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.692659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.692689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.693086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.693430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.693467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.693864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.694205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.694232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-06-11 08:23:42.694598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.694947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.694975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.695338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.695689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.695718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.696079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.696408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.696434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.696655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.697043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.697069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.697433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.697816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.697843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.698057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.698460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.698488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.698874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.699233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.699259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-06-11 08:23:42.699630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.699859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.699886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.700263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.700631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.700658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.700912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.701271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.701299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.701642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.701999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.702027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.702405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.702726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.702754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.703124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.703502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.703531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.703890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.704249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.704276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 
00:31:12.088 [2024-06-11 08:23:42.704622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.704967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.704994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.705342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.705700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.705728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.088 qpair failed and we were unable to recover it. 00:31:12.088 [2024-06-11 08:23:42.706100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.706456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.088 [2024-06-11 08:23:42.706483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-06-11 08:23:42.706835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.707185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.707211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-06-11 08:23:42.707574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.707950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.707977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-06-11 08:23:42.708319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.708557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.708589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-06-11 08:23:42.708827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.709150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.709175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 
00:31:12.089 [2024-06-11 08:23:42.709547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.709903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.709930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-06-11 08:23:42.710291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.710536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.710566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.089 [2024-06-11 08:23:42.710965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.711312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.089 [2024-06-11 08:23:42.711338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.089 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.711701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.712058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.712087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.712448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.712782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.712809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.713178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.713537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.713565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.714001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.714327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.714353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 
00:31:12.359 [2024-06-11 08:23:42.714604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.715018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.715044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.715354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.715680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.715713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.716066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.716449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.716476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.716865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.717220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.717247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.717587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.717958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.717983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.718358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.718710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.718738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.719096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.719472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.719499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 
00:31:12.359 [2024-06-11 08:23:42.719850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.720204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.720231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.720597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.720966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.720992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.721393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.721772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.721800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.722165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.722530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.722557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.722922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.723282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.723315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.723581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.723817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.723843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 00:31:12.359 [2024-06-11 08:23:42.724211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.724557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.359 [2024-06-11 08:23:42.724584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.359 qpair failed and we were unable to recover it. 
00:31:12.359 [2024-06-11 08:23:42.724956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.725317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.725344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.725724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.726060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.726086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.726457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.726782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.726808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.727183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.727536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.727563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.727773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.728149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.728176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.728387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.728745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.728773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.728995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.729354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.729380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 
00:31:12.360 [2024-06-11 08:23:42.729741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.730119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.730159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.730513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.730857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.730882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.731235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.731597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.731624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.731841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.732210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.732236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.732603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.732981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.733007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.733378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.733730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.733757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.734126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.734485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.734512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 
00:31:12.360 [2024-06-11 08:23:42.734878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.735242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.735268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.735617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.736003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.736029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.736393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.736753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.736780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.737126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.737368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.737393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.737744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.738087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.738113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.738486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.738856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.738882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.739254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.739601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.739629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 
00:31:12.360 [2024-06-11 08:23:42.739988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.740314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.740340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.740552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.740917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.740943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.741309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.741669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.741697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.742061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.742405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.742432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.742809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.743165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.743194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.743481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.743844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.743872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 00:31:12.360 [2024-06-11 08:23:42.744124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.744480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.744509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.360 qpair failed and we were unable to recover it. 
00:31:12.360 [2024-06-11 08:23:42.744918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.360 [2024-06-11 08:23:42.745302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.745329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.745676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.746027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.746054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.746428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.746779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.746806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.747160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.747519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.747545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.747910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.748283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.748309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.748675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.749005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.749031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.749389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.749749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.749775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 
00:31:12.361 [2024-06-11 08:23:42.750142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.750375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.750404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.750737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.751076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.751103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.751478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.751846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.751873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.752253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.752598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.752626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.752963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.753322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.753348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.753698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.754046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.754073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.754279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.754657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.754683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 
00:31:12.361 [2024-06-11 08:23:42.754910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.755298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.755325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.755678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.756020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.756046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.756378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.756731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.756758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.757133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.757567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.757594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.757927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.758277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.758302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.758649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.759036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.759061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 00:31:12.361 [2024-06-11 08:23:42.759314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.759562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.361 [2024-06-11 08:23:42.759587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.361 qpair failed and we were unable to recover it. 
00:31:12.361 [2024-06-11 08:23:42.759971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.361 [2024-06-11 08:23:42.760325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.361 [2024-06-11 08:23:42.760351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:12.361 qpair failed and we were unable to recover it.
[ ... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats, with only the microsecond timestamps changing, for every connection attempt logged between the entries shown above and below ... ]
00:31:12.367 [2024-06-11 08:23:42.871244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.367 [2024-06-11 08:23:42.871603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.367 [2024-06-11 08:23:42.871630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:12.367 qpair failed and we were unable to recover it.
00:31:12.367 [2024-06-11 08:23:42.871987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.872354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.872381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.872646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.873025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.873051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.873417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.873801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.873828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.874191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.874547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.874575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.874697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.874951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.874978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.875206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.875547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.875575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.875977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.876303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.876329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 
00:31:12.367 [2024-06-11 08:23:42.876567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.876927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.876953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.877319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.877677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.877704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.878066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.878409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.878435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.878846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.879174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.879200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.879557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.879951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.879978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.880355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.880723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.880751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.881111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.881470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.881497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 
00:31:12.367 [2024-06-11 08:23:42.881723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.882099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.882125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.882496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.882858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.882884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.883297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.883686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.883716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.883914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.884234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.884261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.884660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.885041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.885067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.885436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.885807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.885834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 00:31:12.367 [2024-06-11 08:23:42.886091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.886424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.367 [2024-06-11 08:23:42.886462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.367 qpair failed and we were unable to recover it. 
00:31:12.368 [2024-06-11 08:23:42.886802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.887147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.887178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.887536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.887873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.887900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.888244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.888480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.888510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.888905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.889247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.889275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.889515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.889908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.889936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.890353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.890695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.890723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.891064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.891421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.891462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 
00:31:12.368 [2024-06-11 08:23:42.891831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.892185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.892212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.892563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.892916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.892942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.893256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.893487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.893515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.893859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.894210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.894236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.894596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.894943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.894971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.895272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.895629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.895657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.895882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.896252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.896279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 
00:31:12.368 [2024-06-11 08:23:42.896638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.896895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.896922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.897319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.897641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.897669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.898001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.898365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.898392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.898726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.898952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.898983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.899326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.899672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.899700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.900051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.900391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.900419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.368 qpair failed and we were unable to recover it. 00:31:12.368 [2024-06-11 08:23:42.900780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.901128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.368 [2024-06-11 08:23:42.901154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 
00:31:12.369 [2024-06-11 08:23:42.901572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.901941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.901967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.902332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.902689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.902717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.902992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.903233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.903262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.903615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.904000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.904028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.904373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.904730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.904759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.905131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.905504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.905532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.905881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.906243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.906271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 
00:31:12.369 [2024-06-11 08:23:42.906529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.906760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.906788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.907130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.907480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.907508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.907861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.908210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.908236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.908618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.908978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.909004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.909367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.909711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.909738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.910099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.910470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.910499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.910873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.911110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.911139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 
00:31:12.369 [2024-06-11 08:23:42.911472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.911832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.911858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.912200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.912450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.912477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.912848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.913218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.913244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.913652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.914014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.914040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.914306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.914672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.914700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.369 [2024-06-11 08:23:42.914998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.915227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.369 [2024-06-11 08:23:42.915254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.369 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.915659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.916003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.916029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 
00:31:12.370 [2024-06-11 08:23:42.916385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.916742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.916770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.917129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.917363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.917392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.917740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.918094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.918120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.918384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.918665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.918693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.918967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.919358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.919384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.919760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.920125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.920152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.920517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.920845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.920871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 
00:31:12.370 [2024-06-11 08:23:42.921121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.921484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.921511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.921880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.922241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.922268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.922646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.922870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.922896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.923253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.923609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.923636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.923970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.924180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.924209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.924530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.924899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.924925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.925286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.925646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.925674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 
00:31:12.370 [2024-06-11 08:23:42.926035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.926364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.926391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.926765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.927109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.927134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.927502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.927857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.927882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.928200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.928526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.928553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.370 [2024-06-11 08:23:42.928809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.929166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.370 [2024-06-11 08:23:42.929192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.370 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.929557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.929922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.929949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.930321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.930670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.930697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 
00:31:12.371 [2024-06-11 08:23:42.931049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.931430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.931468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.931770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.932127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.932153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.932515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.932881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.932913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.933275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.933660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.933687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.934058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.934422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.934466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.934706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.935100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.935126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.935393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.935750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.935778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 
00:31:12.371 [2024-06-11 08:23:42.936129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.936487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.936517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.936914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.937155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.937190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.937563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.937902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.937928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.938267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.938561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.938588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.938947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.939191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.939219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.939592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.939945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.939977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.940344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.940700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.940728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 
00:31:12.371 [2024-06-11 08:23:42.941096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.941460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.941487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.941847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.942263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.942289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.942667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.943041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.943066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.943307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.943676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.943703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.371 qpair failed and we were unable to recover it. 00:31:12.371 [2024-06-11 08:23:42.944052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.371 [2024-06-11 08:23:42.944283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.944312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.944674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.945028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.945054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.945429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.945768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.945794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 
00:31:12.372 [2024-06-11 08:23:42.946103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.946466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.946494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.946895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.947125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.947160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.947419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.947794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.947822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.948205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.948563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.948591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.948945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.949307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.949333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.949548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.949924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.949950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.950200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.950572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.950599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 
00:31:12.372 [2024-06-11 08:23:42.950992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.951344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.951371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.951788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.952110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.952137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.952517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.952886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.952911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.953277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.953610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.953637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.953943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.954296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.954327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.954570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.954940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.954966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.955328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.955666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.955701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 
00:31:12.372 [2024-06-11 08:23:42.956072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.956308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.956337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.956734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.957084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.957111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.957453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.957795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.957821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.372 [2024-06-11 08:23:42.958195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.958554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.372 [2024-06-11 08:23:42.958582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.372 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.958935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.959137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.959165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.959557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.959900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.959926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.960278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.960624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.960652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 
00:31:12.373 [2024-06-11 08:23:42.961014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.961379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.961405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.961657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.962046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.962073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.962431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.962809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.962836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.963174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.963525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.963553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.963902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.964265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.964291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.964672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.965011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.965037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.965449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.965772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.965798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 
00:31:12.373 [2024-06-11 08:23:42.966174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.966520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.966547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.966924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.967330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.967356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.967706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.967948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.967977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.968324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.968597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.968625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.969004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.969354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.969379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.969744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.970100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.970127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.970473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.970815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.970842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 
00:31:12.373 [2024-06-11 08:23:42.971214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.971612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.971639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.972058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.972392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.972417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.972769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.973001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.373 [2024-06-11 08:23:42.973026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.373 qpair failed and we were unable to recover it. 00:31:12.373 [2024-06-11 08:23:42.973381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.973725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.973753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.974137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.974371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.974397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.974786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.975147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.975174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.975557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.975890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.975916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-06-11 08:23:42.976277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.976605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.976633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.976990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.977245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.977270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.977624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.977972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.977998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.978333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.978479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.978509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.978864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.979225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.979252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.979715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.980049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.980076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.980430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.980797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.980824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 
00:31:12.374 [2024-06-11 08:23:42.981179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.981419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.981457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.981822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.982152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.982178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.982468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.982744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.982771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.983137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.983489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.983516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.983865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.984219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.984244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.984620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.984972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.984999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.374 qpair failed and we were unable to recover it. 00:31:12.374 [2024-06-11 08:23:42.985354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.985723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.374 [2024-06-11 08:23:42.985750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-06-11 08:23:42.985869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.986246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.986272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.986632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.986998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.987024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.987386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.987675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.987703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.988079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.988355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.988385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.988735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.989077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.989103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.989445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.989776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.989803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.990170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.990528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.990556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-06-11 08:23:42.990905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.991263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.991289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.991631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.991991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.992017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.992231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.992493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.992521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.992893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.993264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.993291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.993639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.993986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.994012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.994408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.994748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.994777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.995157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.995498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.995526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 
00:31:12.375 [2024-06-11 08:23:42.995952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.996307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.375 [2024-06-11 08:23:42.996334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.375 qpair failed and we were unable to recover it. 00:31:12.375 [2024-06-11 08:23:42.996684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.997026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.997056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:42.997422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.997791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.997818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:42.998143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.998473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.998500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:42.998844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.999204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.999231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:42.999579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.999915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:42.999941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.000263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.000599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.000627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 
00:31:12.643 [2024-06-11 08:23:43.000959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.001182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.001211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.001573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.001935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.001961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.002304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.002661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.002688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.003056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.003408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.003435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.003801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.004143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.004169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.004533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.004927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.004952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.005308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.005555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.005582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 
00:31:12.643 [2024-06-11 08:23:43.005964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.006203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.006230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.006561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.006917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.006944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.007201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.007614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.007642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.007894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.008266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.008293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.008507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.008823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.008851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.643 [2024-06-11 08:23:43.009221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.009460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.643 [2024-06-11 08:23:43.009487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.643 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.009890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.010259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.010286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 
00:31:12.644 [2024-06-11 08:23:43.010669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.011034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.011060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.011426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.011790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.011818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.012227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.012550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.012579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.012931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.013260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.013286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.013681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.013905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.013933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.014313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.014723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.014751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.015118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.015475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.015510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 
00:31:12.644 [2024-06-11 08:23:43.015902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.016243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.016269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.016644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.017001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.017028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.017406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.017773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.017800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.018191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.018416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.018465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.018797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.019169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.019198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.019562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.019917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.019943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.020285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.020625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.020654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 
00:31:12.644 [2024-06-11 08:23:43.020963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.021316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.021342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.021694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.021932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.021961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.022322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.022673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.022701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.022956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.023244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.023269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.023619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.023977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.024003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.024371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.024726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.024754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.025126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.025475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.025503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 
00:31:12.644 [2024-06-11 08:23:43.025851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.026210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.026236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.026601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.026959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.026986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.027348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.027564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.027595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.027948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.028318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.644 [2024-06-11 08:23:43.028345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.644 qpair failed and we were unable to recover it. 00:31:12.644 [2024-06-11 08:23:43.028709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.029069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.029096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.029473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.029806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.029832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.030195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.030530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.030558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 
00:31:12.645 [2024-06-11 08:23:43.030942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.031183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.031213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.031586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.031949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.031976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.032334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.032699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.032727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.033067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.033457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.033485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.033833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.034055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.034085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.034476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.034896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.034923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.035272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.035519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.035546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 
00:31:12.645 [2024-06-11 08:23:43.035911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.036250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.036276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.036622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.036984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.037010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.037376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.037728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.037756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.038103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.038343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.038368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.038639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.039002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.039028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.039372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.039731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.039760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.040116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.040471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.040499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 
00:31:12.645 [2024-06-11 08:23:43.040858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.041087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.041113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.041476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.041830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.041857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.042240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.042590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.042617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.042987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.043349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.043376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.043749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.044023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.044050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.044412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.044788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.044816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.045176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.045533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.045561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 
00:31:12.645 [2024-06-11 08:23:43.045996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.046347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.046374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.046766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.047118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.047144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.047516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.047888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.047921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.645 qpair failed and we were unable to recover it. 00:31:12.645 [2024-06-11 08:23:43.048156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.645 [2024-06-11 08:23:43.048504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.048531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.048885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.049255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.049281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.049669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.049883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.049911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.050288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.050627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.050654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 
00:31:12.646 [2024-06-11 08:23:43.051026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.051381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.051407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.051656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.051936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.051963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.052319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.052669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.052697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.053029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.053254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.053282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.053690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.054108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.054135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.054474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.054841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.054880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.055292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.055630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.055658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 
00:31:12.646 [2024-06-11 08:23:43.056101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.056468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.056496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.056841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.057207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.057233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.057715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.058082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.058108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.058486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.058873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.058899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.059340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.059659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.059686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.060067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.060424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.060457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.060813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.061043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.061069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 
00:31:12.646 [2024-06-11 08:23:43.061483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.061857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.061885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.062246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.062479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.062515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.062867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.063213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.063239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.063592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.063950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.063976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.064344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.064703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.064731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.065092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.065420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.065459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.065800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.066163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.066188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 
00:31:12.646 [2024-06-11 08:23:43.066534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.066885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.066913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.067285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.067627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.646 [2024-06-11 08:23:43.067654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.646 qpair failed and we were unable to recover it. 00:31:12.646 [2024-06-11 08:23:43.067923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.068140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.068169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.068536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.068885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.068911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.069280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.069635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.069684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.070083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.070319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.070344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.070483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.070845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.070872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 
00:31:12.647 [2024-06-11 08:23:43.071232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.071619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.071647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.071895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.072245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.072272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.072642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.072989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.073014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.073375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.073728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.073755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.073973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.074363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.074389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.074756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.075116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.075143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.075517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.075886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.075913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 
00:31:12.647 [2024-06-11 08:23:43.076283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.076641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.076669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.077066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.077398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.077425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.077781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.078115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.078141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.078514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.078882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.078908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.079277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.079643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.079671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.079978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.080337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.080363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.080739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.081082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.081108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 
00:31:12.647 [2024-06-11 08:23:43.081515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.081864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.081890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.082182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.082525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.082553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.082914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.083269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.083296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.083665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.084018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.084044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.084385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.084707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.084734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.085103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.085455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.085481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.647 [2024-06-11 08:23:43.085850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.086194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.086220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 
00:31:12.647 [2024-06-11 08:23:43.086582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.086936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.647 [2024-06-11 08:23:43.086963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.647 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.087320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.087548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.087577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.087934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.088285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.088311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.088737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.089002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.089028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.089410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.089754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.089782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.090192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.090555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.090583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.090962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.091298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.091324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 
00:31:12.648 [2024-06-11 08:23:43.091656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.092015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.092041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.092412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.092767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.092794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.093045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.093405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.093431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.093813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.094162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.094189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.094515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.094872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.094898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.095260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.095622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.095650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.096090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.096422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.096456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 
00:31:12.648 [2024-06-11 08:23:43.096815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.097162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.097188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.097540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.097907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.097934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.098296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.098623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.098651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.099029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.099380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.099407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.099624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.099985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.100012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.648 [2024-06-11 08:23:43.100230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.100578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.648 [2024-06-11 08:23:43.100605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.648 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.100832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.101186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.101213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 
00:31:12.649 [2024-06-11 08:23:43.101602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.101977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.102004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.102370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.102716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.102743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.102999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.103343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.103369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.103791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.104034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.104061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.104459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.104797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.104823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.105184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.105576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.105603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.105993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.106348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.106374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 
00:31:12.649 [2024-06-11 08:23:43.106745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.107106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.107132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.107509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.107732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.107758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.107992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.108238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.108264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.108627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.108964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.108989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.109345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.109717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.109744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.110070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.110413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.110453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.110798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.111150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.111178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 
00:31:12.649 [2024-06-11 08:23:43.111562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.111895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.111921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.112283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.112621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.112648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.112985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.113361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.113387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.113719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.114079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.114105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.114476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.114831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.114857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.115221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.115564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.115592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.115849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.116074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.116104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 
00:31:12.649 [2024-06-11 08:23:43.116490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.116846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.116873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.117244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.117606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.117633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.117985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.118342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.118368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.118708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.119061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.119087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.119465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.119805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.119832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.649 [2024-06-11 08:23:43.120205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.120657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.649 [2024-06-11 08:23:43.120685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.649 qpair failed and we were unable to recover it. 00:31:12.650 [2024-06-11 08:23:43.121047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.650 [2024-06-11 08:23:43.121287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.650 [2024-06-11 08:23:43.121312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.650 qpair failed and we were unable to recover it. 
00:31:12.650 [2024-06-11 08:23:43.121608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.650 [2024-06-11 08:23:43.121978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:12.650 [2024-06-11 08:23:43.122004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:12.650 qpair failed and we were unable to recover it.
[... the four-line pattern above (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error" for tqpair=0x7fa788000b90 at addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps changing, from 08:23:43.121 through 08:23:43.232 ...]
00:31:12.655 [2024-06-11 08:23:43.232623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.232991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.233018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.233261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.233636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.233664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.234032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.234343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.234370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.234732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.235077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.235104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.235485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.235816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.235841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.236220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.236563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.236590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.236947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.237313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.237338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 
00:31:12.655 [2024-06-11 08:23:43.237573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.237925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.237952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.238080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.238510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.238536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.238872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.239265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.239292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.239667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.239999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.655 [2024-06-11 08:23:43.240025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.655 qpair failed and we were unable to recover it. 00:31:12.655 [2024-06-11 08:23:43.240390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.240742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.240768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.241121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.241532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.241559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.241912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.242268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.242294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 
00:31:12.656 [2024-06-11 08:23:43.242549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.242691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.242720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.243072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.243437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.243476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.243853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.244084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.244111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.244510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.244841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.244868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.245198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.245588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.245615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.245962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.246342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.246369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.246749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.247107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.247133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 
00:31:12.656 [2024-06-11 08:23:43.247384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.247780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.247808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.248126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.248485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.248513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.248959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.249330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.249356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.249740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.250075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.250102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.250467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.250817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.250843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.251104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.251337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.251364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.251718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.252082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.252109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 
00:31:12.656 [2024-06-11 08:23:43.252479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.252845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.252872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.253227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.253589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.253618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.253991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.254353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.254379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.254729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.255085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.255112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.255488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.255833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.255861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.256111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.256499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.256527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.256903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.257264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.257290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 
00:31:12.656 [2024-06-11 08:23:43.257657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.258018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.258044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.258388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.258737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.258764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.259114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.259455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.259482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.259914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.260236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.260262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.656 [2024-06-11 08:23:43.260521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.260883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.656 [2024-06-11 08:23:43.260909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.656 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.261271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.261634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.261663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.262022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.262361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.262386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 
00:31:12.657 [2024-06-11 08:23:43.262808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.263162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.263189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.263555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.263902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.263928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.264213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.264562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.264589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.264907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.265275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.265300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.265711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.266071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.266097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.266338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.266676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.266704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.267069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.267431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.267467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 
00:31:12.657 [2024-06-11 08:23:43.267724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.268113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.268138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.268513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.268884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.268911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.269279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.269625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.269652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.270086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.270434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.270472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.270821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.271047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.271073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.271423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.271787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.271814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.272177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.272412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.272462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 
00:31:12.657 [2024-06-11 08:23:43.272914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.273268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.273294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.273670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.274012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.274038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.274386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.274713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.274740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.275060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.275422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.275457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.275797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.276152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.276178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.276533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.276895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.276921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.657 [2024-06-11 08:23:43.277280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.277514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.277544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 
00:31:12.657 [2024-06-11 08:23:43.277997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.278352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.657 [2024-06-11 08:23:43.278378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.657 qpair failed and we were unable to recover it. 00:31:12.658 [2024-06-11 08:23:43.278709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.279077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.279103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.658 qpair failed and we were unable to recover it. 00:31:12.658 [2024-06-11 08:23:43.279435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.279704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.279741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.658 qpair failed and we were unable to recover it. 00:31:12.658 [2024-06-11 08:23:43.280113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.280477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.280506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.658 qpair failed and we were unable to recover it. 00:31:12.658 [2024-06-11 08:23:43.280894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.281224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.281251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.658 qpair failed and we were unable to recover it. 00:31:12.658 [2024-06-11 08:23:43.281494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.281867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.281894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.658 qpair failed and we were unable to recover it. 00:31:12.658 [2024-06-11 08:23:43.282118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.282470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.282499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.658 qpair failed and we were unable to recover it. 
00:31:12.658 [2024-06-11 08:23:43.282727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.283091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.658 [2024-06-11 08:23:43.283117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.658 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.283478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.283875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.283903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.284270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.284616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.284643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.284997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.285348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.285375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.285762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.286127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.286154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.286515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.286860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.286893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.287254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.287624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.287651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 
00:31:12.926 [2024-06-11 08:23:43.288027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.288328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.288353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.288693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.289036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.289062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.289300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.289712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.926 [2024-06-11 08:23:43.289739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.926 qpair failed and we were unable to recover it. 00:31:12.926 [2024-06-11 08:23:43.290103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.290456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.290483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.290752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.290976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.291004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.291372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.291701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.291729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.292085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.292455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.292483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 
00:31:12.927 [2024-06-11 08:23:43.292861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.293195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.293222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.293473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.293858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.293890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.294293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.294658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.294685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.295051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.295384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.295410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.295680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.296063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.296090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.296459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.296712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.296738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.297066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.297410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.297437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 
00:31:12.927 [2024-06-11 08:23:43.297804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.298138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.298164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.298514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.298890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.298916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.299282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.299613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.299640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.300008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.300240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.300270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.300624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.300984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.301011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.301384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.301744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.301771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.302142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.302524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.302551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 
00:31:12.927 [2024-06-11 08:23:43.302945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.303307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.303333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.303576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.303980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.304006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.304349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.304702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.304729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.305084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.305457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.305485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.305890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.306227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.306253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.306601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.306981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.307008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.307359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.307692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.307719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 
00:31:12.927 [2024-06-11 08:23:43.308079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.308447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.308475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.308753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.309111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.309137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.309383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.309709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.927 [2024-06-11 08:23:43.309737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.927 qpair failed and we were unable to recover it. 00:31:12.927 [2024-06-11 08:23:43.310090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.310460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.310488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.310856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.311185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.311211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.311624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.311997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.312024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.312395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.312723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.312751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 
00:31:12.928 [2024-06-11 08:23:43.313126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.313484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.313512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.313858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.314207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.314233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.314579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.314942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.314969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.315330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.315657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.315684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.315939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.316342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.316368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.316727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.317100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.317126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.317484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.317842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.317868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 
00:31:12.928 [2024-06-11 08:23:43.318239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.318580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.318608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.318993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.319401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.319426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.319816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.320181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.320206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.320606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.320976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.321002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.321256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.321625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.321653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.322007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.322363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.322389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.322766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.323106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.323133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 
00:31:12.928 [2024-06-11 08:23:43.323510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.323866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.323892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.324256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.324602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.324630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.325003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.325356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.325382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.325747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.325997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.326022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.326408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.326732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.326760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.327095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.327454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.327481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.327867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.328223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.328249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 
00:31:12.928 [2024-06-11 08:23:43.328614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.328933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.328959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.329080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.329455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.329482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.329806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.330052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.330082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.928 qpair failed and we were unable to recover it. 00:31:12.928 [2024-06-11 08:23:43.330455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.330802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.928 [2024-06-11 08:23:43.330829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.331191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.331551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.331580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.331956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.332298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.332324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.332667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.333023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.333051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 
00:31:12.929 [2024-06-11 08:23:43.333421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.333844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.333872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.334095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.334450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.334478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.334832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.335178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.335205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.335431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.335817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.335845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.336190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.336407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.336448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.336846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.337217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.337244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.337619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.337984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.338010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 
00:31:12.929 [2024-06-11 08:23:43.338372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.338707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.338734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.339122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.339475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.339503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.339862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.340111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.340137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.340381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.340760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.340788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.341142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.341475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.341503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.341847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.342202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.342229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.342593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.342820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.342848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 
00:31:12.929 [2024-06-11 08:23:43.343272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.343586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.343614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.343975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.344337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.344363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.344752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.345080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.345107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.345355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.345688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.345716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.346068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.346429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.346465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.346694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.347055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.347081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.347437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.347794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.347821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 
00:31:12.929 [2024-06-11 08:23:43.348189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.348552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.348580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.348934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.349279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.349305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.349676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.349913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.349939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.350173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.350508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.929 [2024-06-11 08:23:43.350535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.929 qpair failed and we were unable to recover it. 00:31:12.929 [2024-06-11 08:23:43.350899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.351256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.351283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.351659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.351913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.351939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.352163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.352530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.352558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 
00:31:12.930 [2024-06-11 08:23:43.352885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.353099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.353129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.353487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.353846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.353873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.354254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.354601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.354628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.354981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.355332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.355358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.355779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.356122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.356148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.356373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.356766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.356794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.357071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.357464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.357492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 
00:31:12.930 [2024-06-11 08:23:43.357913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.358257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.358283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.358621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.359021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.359048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.359293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.359644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.359672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.359970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.360181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.360210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.360501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.360866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.360892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.361133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.361503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.361530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.361897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.362254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.362281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 
00:31:12.930 [2024-06-11 08:23:43.362621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.362973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.363000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.363347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.363705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.363732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.364079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.364464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.364493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.364839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.365195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.365221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.365552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.365926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.365952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.366315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.366700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.366728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.367086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.367315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.367340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 
00:31:12.930 [2024-06-11 08:23:43.367732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.368077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.368103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.368463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.368802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.368828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.369132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.369490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.369518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.369834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.370186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.370211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.930 qpair failed and we were unable to recover it. 00:31:12.930 [2024-06-11 08:23:43.370578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.930 [2024-06-11 08:23:43.370957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.370983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.371364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.371692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.371719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.372080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.372451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.372478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 
00:31:12.931 [2024-06-11 08:23:43.372859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.373191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.373218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.373578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.373939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.373965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.374324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.374578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.374604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.374956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.375297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.375324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.375684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.376010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.376036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.376402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.376752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.376779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.377164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.377518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.377545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 
00:31:12.931 [2024-06-11 08:23:43.377891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.378243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.378268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.378657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.379046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.379072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.379297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.379641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.379670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.379894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.380270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.380296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.380676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.381015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.381048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.381377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.381711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.381738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.381976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.382335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.382362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 
00:31:12.931 [2024-06-11 08:23:43.382541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.382903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.382930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.383295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.383630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.383657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.383887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.384247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.384274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.384654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.384985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.385011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.385265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.385629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.385657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.386009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.386347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.386373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.931 qpair failed and we were unable to recover it. 00:31:12.931 [2024-06-11 08:23:43.386724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.931 [2024-06-11 08:23:43.387084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.387116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 
00:31:12.932 [2024-06-11 08:23:43.387476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.387826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.387851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.388276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.388498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.388532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.388900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.389169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.389194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.389550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.389769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.389797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.390048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.390396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.390423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.390774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.391130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.391157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.391522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.391892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.391919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 
00:31:12.932 [2024-06-11 08:23:43.392288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.392629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.392657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.392990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.393352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.393378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.393723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.394084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.394117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.394476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.394711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.394739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.394997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.395350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.395376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.395745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.396113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.396139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.396503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.396860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.396887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 
00:31:12.932 [2024-06-11 08:23:43.397125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.397468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.397496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.397817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.398182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.398209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.398570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.398801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.398826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.399176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.399520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.399546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.399930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.400291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.400317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.400747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.401079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.401111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.401474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.401715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.401741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 
00:31:12.932 [2024-06-11 08:23:43.402115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.402468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.402496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.402874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.403130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.403155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.403530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.403886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.403913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.404256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.404628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.404656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.405008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.405374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.405400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.405663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.406016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.406042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 00:31:12.932 [2024-06-11 08:23:43.406401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.406741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.932 [2024-06-11 08:23:43.406768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.932 qpair failed and we were unable to recover it. 
00:31:12.933 [2024-06-11 08:23:43.407144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.407491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.407519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.407868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.408180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.408211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.408466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.408820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.408847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.409112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.409455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.409482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.409833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.410149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.410174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.410529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.410891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.410918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.411288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.411658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.411685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 
00:31:12.933 [2024-06-11 08:23:43.412047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.412408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.412434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.412716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.413065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.413091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.413496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.413745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.413771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.414073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.414414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.414447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.414808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.415170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.415195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.415567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.415904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.415930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.416314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.416679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.416706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 
00:31:12.933 [2024-06-11 08:23:43.417080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.417422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.417457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.417802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.418088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.418114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.418477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.418817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.418843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.419205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.419605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.419632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.419984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.420342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.420368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.420727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.421109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.421134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.421507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.421843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.421869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 
00:31:12.933 [2024-06-11 08:23:43.422224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.422583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.422617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.422847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.423202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.423228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.423580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.423928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.423954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.424364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.424709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.424736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.425125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.425487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.425515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.425877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.426214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.426240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 00:31:12.933 [2024-06-11 08:23:43.426616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.426957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.426983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.933 qpair failed and we were unable to recover it. 
00:31:12.933 [2024-06-11 08:23:43.427355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.933 [2024-06-11 08:23:43.427704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.427733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.428086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.428467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.428495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.428765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.429172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.429198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.429579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.429945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.429972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.430359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.430681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.430708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.431066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.431424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.431461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.431817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.432058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.432093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 
00:31:12.934 [2024-06-11 08:23:43.432471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.432835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.432868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.433214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.433561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.433588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.434015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.434380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.434406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.434860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.435100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.435131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.435522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.435877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.435903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.436264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.436632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.436659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.437056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.437419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.437452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 
00:31:12.934 [2024-06-11 08:23:43.437800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.438122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.438148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.438498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.438874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.438901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.439272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.439613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.439640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.439887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.440249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.440274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.440641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.441000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.441028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.441404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.441735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.441761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.442137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.442469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.442496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 
00:31:12.934 [2024-06-11 08:23:43.442831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.443185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.443211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.443581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.443954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.443981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.444306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.444715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.444743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.445121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.445478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.445506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.445946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.446310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.446336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.446692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.447050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.447076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 00:31:12.934 [2024-06-11 08:23:43.447455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.447794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.447820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.934 qpair failed and we were unable to recover it. 
00:31:12.934 [2024-06-11 08:23:43.448187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.934 [2024-06-11 08:23:43.448545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.448574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.448921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.449348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.449374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.449813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.450159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.450185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.450601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.450966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.450993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.451243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.451597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.451624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.452003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.452359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.452386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.452766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.453129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.453155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 
00:31:12.935 [2024-06-11 08:23:43.453529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.453907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.453933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.454300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.454655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.454681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.455042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.455398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.455425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.455759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.456109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.456135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.456478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.456833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.456859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.457087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.457453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.457481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.457829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.458195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.458221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 
00:31:12.935 [2024-06-11 08:23:43.458477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.458847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.458873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.459221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.459588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.459615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.459995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.460279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.460307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.460676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.461016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.461042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.461392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.461747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.461775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.462160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.462518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.462546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.462994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.463232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.463259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 
00:31:12.935 [2024-06-11 08:23:43.463630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.463970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.463995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.464337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.464718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.464745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.465108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.465462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.465489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.465879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.466221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.935 [2024-06-11 08:23:43.466247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.935 qpair failed and we were unable to recover it. 00:31:12.935 [2024-06-11 08:23:43.466624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.466925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.466951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.467220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.467554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.467582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.467938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.468306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.468333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 
00:31:12.936 [2024-06-11 08:23:43.468679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.469031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.469057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.469389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.469757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.469785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.470144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.470504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.470532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.470944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.471307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.471332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.471702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.471931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.471957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.472220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.472582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.472609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.472850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.473218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.473244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 
00:31:12.936 [2024-06-11 08:23:43.473591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.473950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.473975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.474326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.474663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.474692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.475051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.475426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.475577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.476006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.476342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.476368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.476607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.476876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.476902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.477263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.477623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.477651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.478024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.478364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.478390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 
00:31:12.936 [2024-06-11 08:23:43.478726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.479086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.479113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.479496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.479825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.479851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.480229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.480582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.480610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.480969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.481319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.481345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.481679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.482035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.482062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.482436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.482796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.482823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.483189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.483535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.483563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 
00:31:12.936 [2024-06-11 08:23:43.483920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.484281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.484308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.484665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.485015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.485041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.936 [2024-06-11 08:23:43.485410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.485738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.936 [2024-06-11 08:23:43.485765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.936 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.486131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.486471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.486499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.486872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.487234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.487259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.487523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.487915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.487941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.488290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.488653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.488681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 
00:31:12.937 [2024-06-11 08:23:43.489115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.489489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.489516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.489857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.490212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.490239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.490461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.490843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.490869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.491230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.491559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.491586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.491936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.492165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.492191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.492578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.492912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.492938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.493290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.493638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.493665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 
00:31:12.937 [2024-06-11 08:23:43.494017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.494380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.494406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.494675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.495032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.495059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.495486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.495887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.495914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.496279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.496645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.496679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.496933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.497296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.497321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.497671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.498033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.498059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.498345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.498680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.498706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 
00:31:12.937 [2024-06-11 08:23:43.499070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.499385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.499410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.499810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.500159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.500185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.500548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.500919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.500946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.501314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.501712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.501739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.502102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.502464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.502492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.502865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.503211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.503237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.503565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.503898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.503929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 
00:31:12.937 [2024-06-11 08:23:43.504287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.504695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.504722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.505081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.505402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.937 [2024-06-11 08:23:43.505429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.937 qpair failed and we were unable to recover it. 00:31:12.937 [2024-06-11 08:23:43.505789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.506077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.506104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.506469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.506809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.506835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.507195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.507530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.507556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.507920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.508261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.508287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.508664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.509032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.509058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 
00:31:12.938 [2024-06-11 08:23:43.509325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.509681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.509707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.510061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.510407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.510433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.510843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.511199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.511231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.511515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.511879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.511905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.512272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.512624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.512653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.513025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.513386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.513412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.513771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.514094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.514120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 
00:31:12.938 [2024-06-11 08:23:43.514483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.514829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.514855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.515099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.515460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.515488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.515854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.516215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.516241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.516593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.516818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.516847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.517212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.517561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.517588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.517958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.518300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.518332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.518686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.519022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.519049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 
00:31:12.938 [2024-06-11 08:23:43.519457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.519819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.519846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.520221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.520552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.520580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.520952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.521297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.521322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.521716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.522099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.522126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.522387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.522712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.522738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.523098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.523459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.523486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.523932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.524252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.524277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 
00:31:12.938 [2024-06-11 08:23:43.524658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.524997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.525023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.525390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.525734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.525762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.938 qpair failed and we were unable to recover it. 00:31:12.938 [2024-06-11 08:23:43.526018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.526398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.938 [2024-06-11 08:23:43.526424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.526798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.527164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.527191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.527436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.527798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.527825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.528205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.528571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.528599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.528967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.529300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.529325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 
00:31:12.939 [2024-06-11 08:23:43.529689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.529918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.529945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.530334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.530671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.530698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.531073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.531431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.531469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.531811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.532144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.532170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.532527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.532890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.532916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.533268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.533532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.533558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.533811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.534166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.534192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 
00:31:12.939 [2024-06-11 08:23:43.534566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.534922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.534948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.535222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.535556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.535583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.535950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.536336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.536362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.536729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.537082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.537108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.537547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.537917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.537943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.538320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.538675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.538702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.539058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.539291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.539319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 
00:31:12.939 [2024-06-11 08:23:43.539660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.540031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.540057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.540432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.540781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.540807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.541172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.541524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.541552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.541906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.542238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.542264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.542635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.543012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.543038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.543417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.543784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.543811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.544172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.544526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.544553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 
00:31:12.939 [2024-06-11 08:23:43.544904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.545267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.545293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.545671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.546003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.546028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.546301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.546674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.939 [2024-06-11 08:23:43.546701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.939 qpair failed and we were unable to recover it. 00:31:12.939 [2024-06-11 08:23:43.547072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.547430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.547468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.547822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.548162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.548189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.548617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.549056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.549082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.549470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.549804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.549830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 
00:31:12.940 [2024-06-11 08:23:43.550042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.550432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.550471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.550811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.551171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.551197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.551565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.551927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.551954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.552333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.552557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.552585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.552925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.553282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.553308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.553711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.554077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.554104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.554544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.554906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.554932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 
00:31:12.940 [2024-06-11 08:23:43.555304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.555639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.555665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.556025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.556280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.556306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.556668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.557007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.557033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.557403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.557680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.557707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.558076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.558485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.558513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.558895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.559231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.559257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.559649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.559990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.560016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 
00:31:12.940 [2024-06-11 08:23:43.560372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.560702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.560730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.561079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.561434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.561474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.561899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.562231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.562257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.562603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.562893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.562920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.563282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.563632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.563659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:12.940 [2024-06-11 08:23:43.564039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.564371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:12.940 [2024-06-11 08:23:43.564397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:12.940 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.564744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.565114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.565140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 
00:31:13.210 [2024-06-11 08:23:43.565485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.565706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.565731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.565965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.566295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.566323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.566674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.567019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.567046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.567411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.567702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.567730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.568089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.568476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.568503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.568870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.569231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.569258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.569636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.569860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.569886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 
00:31:13.210 [2024-06-11 08:23:43.570259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.570499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.570529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.570908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.571119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.571148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.571505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.571855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.571882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.572256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.572596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.572623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.572987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.573342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.573368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.573749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.574098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.574124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 00:31:13.210 [2024-06-11 08:23:43.574473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.574853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.210 [2024-06-11 08:23:43.574879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.210 qpair failed and we were unable to recover it. 
00:31:13.211 [2024-06-11 08:23:43.575239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.575603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.575631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.575970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.576194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.576219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.576563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.576906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.576932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.577309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.577671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.577698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.577925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.578178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.578205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.578514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.578843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.578869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.579239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.579600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.579627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 
00:31:13.211 [2024-06-11 08:23:43.579989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.580364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.580390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.580795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.581143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.581169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.581571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.581942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.581968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.582395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.582749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.582776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.583120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.583466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.583495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.583839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.584196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.584224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.584585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.584917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.584944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 
00:31:13.211 [2024-06-11 08:23:43.585368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.585691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.585719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.586072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.586425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.586460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.586834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.587173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.587199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.587552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.587901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.587927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.588275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.588669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.588697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.589057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.589302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.589328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.589721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.590077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.590104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 
00:31:13.211 [2024-06-11 08:23:43.590542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.590881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.590907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.591231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.591593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.591621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.591881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.592208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.592233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.592625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.592976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.593003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.593369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.593700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.593726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.594080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.594425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.594460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 00:31:13.211 [2024-06-11 08:23:43.594768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.594994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.211 [2024-06-11 08:23:43.595020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.211 qpair failed and we were unable to recover it. 
00:31:13.211 [2024-06-11 08:23:43.595407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.595735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.595763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.596133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.596495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.596522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.596877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.597120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.597145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.597499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.597836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.597861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.598238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.598609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.598636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.598909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.599188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.599214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.599653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.600007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.600034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 
00:31:13.212 [2024-06-11 08:23:43.600390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.600753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.600781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.601132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.601377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.601406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.601787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.602138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.602165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.602490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.602769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.602795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.603159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.603516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.603543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.603955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.604348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.604374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.604752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.605084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.605110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 
00:31:13.212 [2024-06-11 08:23:43.605481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.605836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.605866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.606231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.606567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.606597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.607035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.607392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.607419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.607811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.608169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.608196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.608556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.608882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.608909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.609297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.609631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.609658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.610030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.610374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.610402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 
00:31:13.212 [2024-06-11 08:23:43.610781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.611120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.611148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.611484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.611816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.611842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.612293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.612616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.612643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.613026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.613390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.613423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.613775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.614126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.614152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.614518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.614893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.614921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.212 [2024-06-11 08:23:43.615279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.615627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.615656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 
00:31:13.212 [2024-06-11 08:23:43.616037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.616365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.212 [2024-06-11 08:23:43.616391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.212 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.616754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.617120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.617147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.617521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.617882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.617909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.618278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.618638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.618666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.619024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.619398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.619425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.619829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.620148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.620175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.620449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.620821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.620854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 
00:31:13.213 [2024-06-11 08:23:43.621231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.621467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.621497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.621870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.622200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.622227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.622584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.622939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.622965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.623319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.623656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.623683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.624090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.624419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.624455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.624722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.625082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.625108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.625484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.625769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.625796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 
00:31:13.213 [2024-06-11 08:23:43.626205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.626571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.626599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.626948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.627288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.627315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.627714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.628074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.628107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.628484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.628843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.628870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.629214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.629562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.629590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.629970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.630203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.630233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.630591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.630943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.630970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 
00:31:13.213 [2024-06-11 08:23:43.631335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.631674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.631701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.632065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.632424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.632462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.632802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.633150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.633176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.633593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.633978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.634004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.634190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.634598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.634625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.634972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.635334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.635367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.635722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.636065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.636092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 
00:31:13.213 [2024-06-11 08:23:43.636427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.636784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.636810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.213 qpair failed and we were unable to recover it. 00:31:13.213 [2024-06-11 08:23:43.637074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.213 [2024-06-11 08:23:43.637463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.637491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.637743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.638098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.638127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.638500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.638740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.638767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.639131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.639493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.639522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.639890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.640243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.640271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.640501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.640838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.640866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 
00:31:13.214 [2024-06-11 08:23:43.641222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.641593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.641621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.641997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.642359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.642386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.642739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.643113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.643139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.643524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.643890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.643919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.644287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.644633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.644661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.645035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.645394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.645422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.645844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.646176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.646202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 
00:31:13.214 [2024-06-11 08:23:43.646588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.646923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.646957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.647330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.647685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.647713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.648072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.648427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.648464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.648827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.649178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.649204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.649415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.649675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.649702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.650075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.650426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.650477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.650825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.651216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.651242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 
00:31:13.214 [2024-06-11 08:23:43.651591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.651854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.651880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.652246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.652602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.652629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.652993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.653223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.653250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.653655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.654030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.654058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.654417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.654769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.654797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.655160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.655376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.655404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 00:31:13.214 [2024-06-11 08:23:43.655818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.656159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.656185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.214 qpair failed and we were unable to recover it. 
00:31:13.214 [2024-06-11 08:23:43.656430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.214 [2024-06-11 08:23:43.656811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.656838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.657205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.657649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.657677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.658027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.658384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.658410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.658803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.659171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.659200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.659573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.659940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.659971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.660376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.660712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.660740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.661160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.661503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.661530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 
00:31:13.215 [2024-06-11 08:23:43.661898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.662255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.662283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.662638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.662996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.663023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.663395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.663729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.663757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.664180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.664539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.664567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.664943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.665336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.665362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.665750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.666004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.666031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.666313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.666669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.666698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 
00:31:13.215 [2024-06-11 08:23:43.666950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.667324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.667350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.667715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.668067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.668093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.668461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.668812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.668839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.669200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.669531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.669558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.669936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.670286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.670315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.670653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.671016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.671043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.671396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.671762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.671790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 
00:31:13.215 [2024-06-11 08:23:43.672035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.672392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.672418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.672769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.673121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.673150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.673526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.673890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.673916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.674289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.674638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.674666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.675031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.675387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.215 [2024-06-11 08:23:43.675415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.215 qpair failed and we were unable to recover it. 00:31:13.215 [2024-06-11 08:23:43.675773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.676123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.676153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.676507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.676874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.676900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 
00:31:13.216 [2024-06-11 08:23:43.677274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.677604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.677632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.678003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.678357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.678384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.678777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.679121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.679147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.679501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.679836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.679862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.680125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.680490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.680518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.680894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.681252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.681277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.681622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.681978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.682004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 
00:31:13.216 [2024-06-11 08:23:43.682261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.682483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.682514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.682917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.683286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.683312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.683666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.683988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.684014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.684360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.684730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.684758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.685063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.685466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.685495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.685842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.686186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.686212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.686636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.687001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.687028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 
00:31:13.216 [2024-06-11 08:23:43.687398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.687764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.687791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.688168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.688536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.688563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.688910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.689265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.689292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.689560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.689901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.689927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.690273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.690594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.690621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.690874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.691235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.691262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.691645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.691988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.692014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 
00:31:13.216 [2024-06-11 08:23:43.692364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.692723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.692750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.693110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.693466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.693495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.693833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.694181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.694208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.694614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.694986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.695012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.695363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.695694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.695721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.216 [2024-06-11 08:23:43.696079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.696436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.216 [2024-06-11 08:23:43.696474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.216 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.696825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.697159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.697186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 
00:31:13.217 [2024-06-11 08:23:43.697558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.697905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.697931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.698166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.698528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.698556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.698919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.699297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.699323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.699678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.700028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.700056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.700406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.700779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.700807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.701213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.701567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.701594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.701931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.702260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.702286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 
00:31:13.217 [2024-06-11 08:23:43.702596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.702958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.702985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.703364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.703737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.703765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.704138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.704478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.704506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.704868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.705206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.705235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.705575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.705909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.705936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.706272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.706616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.706644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.707029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.707394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.707420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 
00:31:13.217 [2024-06-11 08:23:43.707790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.708146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.708173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.708569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.708933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.708960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.709332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.709674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.709703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.710080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.710434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.710476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.710861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.711212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.711242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.711603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.711960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.711986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.712368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.712723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.712751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 
00:31:13.217 [2024-06-11 08:23:43.713103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.713322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.713351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.713588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.713972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.713999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.714372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.714599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.714630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.714967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.715327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.715353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.715707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.716040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.716067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.716447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.716863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.217 [2024-06-11 08:23:43.716889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.217 qpair failed and we were unable to recover it. 00:31:13.217 [2024-06-11 08:23:43.717324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.717700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.717727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 
00:31:13.218 [2024-06-11 08:23:43.718062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.718417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.718454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.718840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.719119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.719145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.719507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.719872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.719902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.720249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.720594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.720621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.720959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.721327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.721353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.721609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.721983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.722010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.722363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.722728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.722756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 
00:31:13.218 [2024-06-11 08:23:43.723130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.723472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.723500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.723874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.724153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.724182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.724536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.724907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.724934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.725312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.725741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.725769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.726109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.726357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.726386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.726735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.727056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.727082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.727433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.727847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.727874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 
00:31:13.218 [2024-06-11 08:23:43.728225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.728614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.728645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.728995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.729335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.729362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.729708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.730082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.730109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.730490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.730881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.730914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.731298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.731544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.731572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.731937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.732293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.732319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.732575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.732794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.732823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 
00:31:13.218 [2024-06-11 08:23:43.733222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.733569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.733596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.733948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.734284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.734310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.734663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.734898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.734927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.735279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.735655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.735682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.736040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.736456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.736484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.736836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.737155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.737181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.218 qpair failed and we were unable to recover it. 00:31:13.218 [2024-06-11 08:23:43.737608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.218 [2024-06-11 08:23:43.737935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.737968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 
00:31:13.219 [2024-06-11 08:23:43.738317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.738631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.738660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.739010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.739352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.739379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.739727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.740070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.740097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.740458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.740810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.740837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.741177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.741516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.741544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.741903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.742222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.742250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.742502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.742857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.742896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 
00:31:13.219 [2024-06-11 08:23:43.743283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.743652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.743681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.744061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.744404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.744431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.744846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.745203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.745238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.745577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.745998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.746025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.746386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.746859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.746886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.747146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.747504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.747532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.747781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.748101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.748127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 
00:31:13.219 [2024-06-11 08:23:43.748489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.748863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.748890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.219 qpair failed and we were unable to recover it. 00:31:13.219 [2024-06-11 08:23:43.749268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.749614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.219 [2024-06-11 08:23:43.749643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.750021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.750376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.750403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.750767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.750986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.751015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.751380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.751783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.751810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.752185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.752562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.752596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.752884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.753105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.753136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 
00:31:13.220 [2024-06-11 08:23:43.753575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1259581 Killed "${NVMF_APP[@]}" "$@" 00:31:13.220 [2024-06-11 08:23:43.753923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.753950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.754293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 08:23:43 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:31:13.220 [2024-06-11 08:23:43.754626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.754656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 08:23:43 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:13.220 [2024-06-11 08:23:43.755065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 08:23:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:13.220 08:23:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:13.220 [2024-06-11 08:23:43.755461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.755489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 08:23:43 -- common/autotest_common.sh@10 -- # set +x 00:31:13.220 [2024-06-11 08:23:43.755883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.756214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.756241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.756594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.756926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.756954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.757201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.757552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.757580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 
00:31:13.220 [2024-06-11 08:23:43.758003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.758213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.758243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.758582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.758943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.758977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.759326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.759689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.759717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.760074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.760452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.760482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.760744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.761091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.761119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.761384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.761724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.761753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 00:31:13.220 [2024-06-11 08:23:43.762118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.762500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.220 [2024-06-11 08:23:43.762529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.220 qpair failed and we were unable to recover it. 
00:31:13.221 [2024-06-11 08:23:43.762929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 08:23:43 -- nvmf/common.sh@469 -- # nvmfpid=1260432 00:31:13.221 08:23:43 -- nvmf/common.sh@470 -- # waitforlisten 1260432 00:31:13.221 [2024-06-11 08:23:43.763289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.763319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.763679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 08:23:43 -- common/autotest_common.sh@819 -- # '[' -z 1260432 ']' 00:31:13.221 08:23:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:13.221 [2024-06-11 08:23:43.763807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.763837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 08:23:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.221 [2024-06-11 08:23:43.764188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 08:23:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:13.221 08:23:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.221 [2024-06-11 08:23:43.764485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.764524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 08:23:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:13.221 08:23:43 -- common/autotest_common.sh@10 -- # set +x 00:31:13.221 [2024-06-11 08:23:43.764863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.765194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.765222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.765490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.765768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.765795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 
00:31:13.221 [2024-06-11 08:23:43.766038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.766427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.766467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.766888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.767263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.767293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.767683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.768107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.768136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.768386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.768723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.768750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.769003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.769320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.769349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.769609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.770016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.770045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.770394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.770762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.770793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 
00:31:13.221 [2024-06-11 08:23:43.771160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.771514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.771548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.771944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.772308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.772336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.772748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.773115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.773146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.773503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.773895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.773924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.774186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.774536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.774566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.774927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.775287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.775316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.775669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.775990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.776022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 
00:31:13.221 [2024-06-11 08:23:43.776428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.776702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.776734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.777122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.777530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.777560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.777810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.778182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.778212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.778471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.778876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.778904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.779269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.779623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.779652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.779933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.780275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.780304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 00:31:13.221 [2024-06-11 08:23:43.780648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.780869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.780899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.221 qpair failed and we were unable to recover it. 
00:31:13.221 [2024-06-11 08:23:43.781160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.221 [2024-06-11 08:23:43.781471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.781501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.781763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.782131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.782160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.782453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.782696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.782725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.783045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.783370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.783398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.783651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.784014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.784042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.784401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.784788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.784817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.785168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.785505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.785533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 
00:31:13.222 [2024-06-11 08:23:43.785916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.786129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.786163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.786619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.786990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.787016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.787260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.787548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.787575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.787958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.788259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.788286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.788557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.788895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.788922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.789286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.789540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.789569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.789944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.790175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.790203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 
00:31:13.222 [2024-06-11 08:23:43.790495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.790856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.790903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.791262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.791672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.791700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.791926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.792320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.792347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.792791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.793173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.793201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.793582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.793972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.794001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.794395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.794866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.794895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.795294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.795688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.795716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 
00:31:13.222 [2024-06-11 08:23:43.795990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.796386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.796417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.796761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.797145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.797172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.797388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.797701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.797730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.798010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.798256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.798283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.798549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.798808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.798835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.799239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.799588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.799616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.799986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.800338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.800363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 
00:31:13.222 [2024-06-11 08:23:43.800704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.801062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.222 [2024-06-11 08:23:43.801092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.222 qpair failed and we were unable to recover it. 00:31:13.222 [2024-06-11 08:23:43.801326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.801680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.801709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.802064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.802406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.802434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.802934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.803170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.803196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.803464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.803799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.803826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.804108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.804477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.804506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.804852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.805222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.805248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 
00:31:13.223 [2024-06-11 08:23:43.805525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.805903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.805929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.806304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.806598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.806625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.807014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.807360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.807385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.807695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.808091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.808117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.808491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.808913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.808941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.809190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.809522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.809549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.809814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.810065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.810091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 
00:31:13.223 [2024-06-11 08:23:43.810490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.810894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.810920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.811297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.811667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.811694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.811934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.812305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.812331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.812696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.813081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.813108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.813503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.813884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.813911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.814290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.814627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.814655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.815006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.815376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.815404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 
00:31:13.223 [2024-06-11 08:23:43.815778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.816119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.816146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.223 qpair failed and we were unable to recover it.
00:31:13.223 [2024-06-11 08:23:43.816563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.816811] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:31:13.223 [2024-06-11 08:23:43.816842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.816870] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:13.223 [2024-06-11 08:23:43.816869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.223 qpair failed and we were unable to recover it.
00:31:13.223 [2024-06-11 08:23:43.817252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.817589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.817615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.223 qpair failed and we were unable to recover it.
00:31:13.223 [2024-06-11 08:23:43.818042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.818410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.818451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.223 qpair failed and we were unable to recover it.
00:31:13.223 [2024-06-11 08:23:43.818828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.819210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.819238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.223 qpair failed and we were unable to recover it.
00:31:13.223 [2024-06-11 08:23:43.819605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.819961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.223 [2024-06-11 08:23:43.819988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.223 qpair failed and we were unable to recover it.
00:31:13.223 [2024-06-11 08:23:43.820355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.820698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.820727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.223 [2024-06-11 08:23:43.821090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.821463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.223 [2024-06-11 08:23:43.821492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.223 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.821801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.822051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.822077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.822352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.822589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.822619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.822986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.823355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.823382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.823622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.823898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.823926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.824301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.824673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.824702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 
00:31:13.224 [2024-06-11 08:23:43.825093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.825460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.825488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.825710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.825941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.825968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.826290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.826638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.826667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.827035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.827403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.827430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.827753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.828122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.828150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.828512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.828750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.828777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.829198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.829558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.829591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 
00:31:13.224 [2024-06-11 08:23:43.829945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.830289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.830315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.830550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.830908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.830933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.831314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.831672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.831700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.832045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.832404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.832432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.832816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.832944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.832973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.833328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.833531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.833558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.833924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.834254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.834279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 
00:31:13.224 [2024-06-11 08:23:43.834634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.835007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.835034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.835456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.835612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.835639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.836011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.836380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.836407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.836790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.837044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.837071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.837453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.837807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.837833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.224 [2024-06-11 08:23:43.838204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.838558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.224 [2024-06-11 08:23:43.838586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.224 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.838834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.839164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.839191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 
00:31:13.225 [2024-06-11 08:23:43.839419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.839834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.839861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.840212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.840541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.840568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.840814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.841160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.841186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.841536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.841781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.841812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.842224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.842564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.842592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.842982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.843348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.843375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.843763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.844109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.844135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 
00:31:13.225 [2024-06-11 08:23:43.844503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.844868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.844895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.845265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.845602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.845630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.845865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.846109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.846136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.846425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.846774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.846802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.847045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.847382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.847407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.225 [2024-06-11 08:23:43.847801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.848139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.225 [2024-06-11 08:23:43.848166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.225 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.848540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.848916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.848951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 
00:31:13.495 [2024-06-11 08:23:43.849216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.849574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.849601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.849948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.850281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.850309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.850667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.851041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.851068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.851460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.851798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.851825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.852192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.852420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.852458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.852821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.853159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.495 [2024-06-11 08:23:43.853185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.853423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.853787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.853816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 
00:31:13.495 [2024-06-11 08:23:43.854058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.854387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.854414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.854789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.855041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.855070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.855475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.855846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.855880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.856244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.856627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.856655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.857041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.857306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.857332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.857473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.857735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.857764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.858126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.858338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.858367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 
00:31:13.495 [2024-06-11 08:23:43.858731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.859074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.859101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.859489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.859737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.859763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.860140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.860323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.860349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.495 [2024-06-11 08:23:43.860596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.860818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.495 [2024-06-11 08:23:43.860845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.495 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.861105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.861460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.861487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.861883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.862240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.862273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.862611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.863002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.863029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 
00:31:13.496 [2024-06-11 08:23:43.863374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.863614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.863644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.864040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.864372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.864397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.864782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.865154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.865180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.865535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.865909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.865936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.866317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.866536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.866563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.866937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.867300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.867327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.867567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.867915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.867942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 
00:31:13.496 [2024-06-11 08:23:43.868307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.868675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.868702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.869048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.869378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.869417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.869806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.870005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.870030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.870411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.870820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.870847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.871111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.871471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.871499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.871844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.872218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.872245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.872624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.872988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.873014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 
00:31:13.496 [2024-06-11 08:23:43.873394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.873734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.873761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.873996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.874201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.874230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.874466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.874705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.874734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.874960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.875281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.875307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.875698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.875942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.875976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.876351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.876735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.876764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.877141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.877501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.877528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 
00:31:13.496 [2024-06-11 08:23:43.877877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.878229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.878258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.878660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.879023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.879050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.879412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.879762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.879791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.880147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.880502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.880530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.496 qpair failed and we were unable to recover it. 00:31:13.496 [2024-06-11 08:23:43.880742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.496 [2024-06-11 08:23:43.881092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.881118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.881394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.881761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.881789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.882031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.882252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.882281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 
00:31:13.497 [2024-06-11 08:23:43.882668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.883026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.883054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.883393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.883619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.883652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.884003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.884331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.884358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.884720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.884956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.884982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.885229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.885465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.885494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.885859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.886201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.886227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.886648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.887040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.887066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 
00:31:13.497 [2024-06-11 08:23:43.887304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.887590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.887616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.887949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.888314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.888340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.888701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.888921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.888950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.889343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.889463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.889491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.889821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.890169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.890196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.890591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.890813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.890838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.891238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.891469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.891497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 
00:31:13.497 [2024-06-11 08:23:43.891756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.892033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.892059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.892415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.892798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.892827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.893178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.893528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.893556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.893809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.894029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.894059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.894421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.894790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.894819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.895183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.895532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.895559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.895913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.896279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.896306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 
00:31:13.497 [2024-06-11 08:23:43.896562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.896908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.896935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.897192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.897451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.897483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.897779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.898127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.898153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.898526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.898892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.898920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.899162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.899414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.497 [2024-06-11 08:23:43.899452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.497 qpair failed and we were unable to recover it. 00:31:13.497 [2024-06-11 08:23:43.899864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.900208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.900233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.900650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.900881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.900907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 
00:31:13.498 [2024-06-11 08:23:43.901272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.901646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.901674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.902039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.902296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.902322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.902703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.903042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.903068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.903453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.903800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.903826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.903939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.904325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.904352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.904706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.904947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.904973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.905390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.905735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.905766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 
00:31:13.498 [2024-06-11 08:23:43.906146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.906485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.906513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.906760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.907118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.907144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.907514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.907877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.907903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.908251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.908592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.908620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.908950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.909160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.909188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.909421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.909785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.909812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.910185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.910492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.910521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 
00:31:13.498 [2024-06-11 08:23:43.910548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:13.498 [2024-06-11 08:23:43.910926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.911140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.911172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.911564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.911933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.911960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.912374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.912739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.912768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.913148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.913535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.913563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.913924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.914333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.914360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.914738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.915128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.915156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.915559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.915955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.915983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 
00:31:13.498 [2024-06-11 08:23:43.916365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.916747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.916777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.917165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.917556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.917586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.917974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.918342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.918369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.918743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.918991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.919022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.919394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.919833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.919863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.498 qpair failed and we were unable to recover it. 00:31:13.498 [2024-06-11 08:23:43.920240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.920635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.498 [2024-06-11 08:23:43.920663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.921057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.921474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.921505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 
00:31:13.499 [2024-06-11 08:23:43.921905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.922251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.922279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.922630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.923014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.923044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.923477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.923729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.923756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.924121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.924474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.924502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.924915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.925250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.925276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.925551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.925883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.925912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.926171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.926528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.926562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 
00:31:13.499 [2024-06-11 08:23:43.926948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.927220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.927246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.927605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.927951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.927977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.928336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.928618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.928645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.928897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.929270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.929297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.929692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.930054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.930081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.930466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.930835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.930862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.931214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.931572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.931599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 
00:31:13.499 [2024-06-11 08:23:43.931860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.932108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.932134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.932488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.932892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.932919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.933130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.933525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.933554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.933923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.934286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.934313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.934677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.935023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.935050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.935401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.935703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.935732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.936089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.936462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.936490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 
00:31:13.499 [2024-06-11 08:23:43.936895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.937226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.937255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.499 qpair failed and we were unable to recover it. 00:31:13.499 [2024-06-11 08:23:43.937619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.937969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.499 [2024-06-11 08:23:43.937995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.938368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.938561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.938591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.938927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.939279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.939306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.939671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.940058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.940083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.940433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.940806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.940832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.941169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.941528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.941556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 
00:31:13.500 [2024-06-11 08:23:43.941914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.942302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.942329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.942556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.942930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.942957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.943344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.943692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.943719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.944060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.944450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.944479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.944826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.945170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.945197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.945542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.945909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.945935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.946284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.946671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.946700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 
00:31:13.500 [2024-06-11 08:23:43.947062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.947403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.947430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.947685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.947940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.947968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.948322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.948698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.948727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.949087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.949329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.949355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.949727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.950054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.950081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.950462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.950830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.950857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.951094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.951467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.951496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 
00:31:13.500 [2024-06-11 08:23:43.951835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.952178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.952204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.952560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.952938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.952966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.953326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.953702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.953731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.954094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.954460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.954487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.954895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.955264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.955292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.955680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.955887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.955913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.956318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.956693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.956720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 
00:31:13.500 [2024-06-11 08:23:43.957069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.957310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.957341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.500 qpair failed and we were unable to recover it. 00:31:13.500 [2024-06-11 08:23:43.957691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.958043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.500 [2024-06-11 08:23:43.958070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.958422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.958784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.958811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.959159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.959532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.959562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.959942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.960291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.960317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.960553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.960907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.960934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.961311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.961652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.961680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 
00:31:13.501 [2024-06-11 08:23:43.962020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.962250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.962277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.962675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.963016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.963044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.963273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.963612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.963640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.963983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.964200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.964227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.964496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.964775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.964802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.965085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.965432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.965483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.965858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.966191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.966219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 
00:31:13.501 [2024-06-11 08:23:43.966612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.966990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.967017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.967376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.967841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.967868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.968103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.968455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.968483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.968818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.969194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.969220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.969473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.969806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.969833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.970208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.970572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.970601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.970985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.971344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.971372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 
00:31:13.501 [2024-06-11 08:23:43.971721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.971965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.971991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.972425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.972788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.972816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.973182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.973547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.973576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.973924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.974157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.974185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.974552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.974896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.974921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.975272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.975614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.975642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.976016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.976375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.976401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 
00:31:13.501 [2024-06-11 08:23:43.976658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.977008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.977035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.977427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.977665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.501 [2024-06-11 08:23:43.977693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.501 qpair failed and we were unable to recover it. 00:31:13.501 [2024-06-11 08:23:43.978039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.978430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.978467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.978756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.978993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.979024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.979387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.979751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.979780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.980137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.980497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.980525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.980905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.981255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.981281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 
00:31:13.502 [2024-06-11 08:23:43.981502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.981843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.981868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.982107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.982466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.982494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.982848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.983227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.983253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.983599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.983991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.984017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.984459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.984820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.984847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.985215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.985570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.985597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.986011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.986380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.986409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 
00:31:13.502 [2024-06-11 08:23:43.986781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.987143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.987169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.987570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.987786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.987811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.988070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.988452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.988481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.988732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.989012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.989039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.989386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.989647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.989676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.990048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.990392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.990419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.990793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.991157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.991185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 
00:31:13.502 [2024-06-11 08:23:43.991559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.991901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.991929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.992293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.992623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.992650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.992893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.993241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.993268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.993670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.994056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.994083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.994478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.994826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.994854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.995222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.995577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.995606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.996047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.996363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.996389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 
00:31:13.502 [2024-06-11 08:23:43.996644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.997027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.997059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.997423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.997780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.997808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.502 [2024-06-11 08:23:43.998016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.998256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.502 [2024-06-11 08:23:43.998283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.502 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:43.998666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:43.999045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:43.999071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:43.999437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:43.999776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:43.999844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.000258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.000623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.000652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.000993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.001357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.001383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 
00:31:13.503 [2024-06-11 08:23:44.001766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.002128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.002156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.503 qpair failed and we were unable to recover it.
00:31:13.503 [2024-06-11 08:23:44.002527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.002908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.002934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.503 qpair failed and we were unable to recover it.
00:31:13.503 [2024-06-11 08:23:44.003316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.003665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.003665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:31:13.503 [2024-06-11 08:23:44.003692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.503 qpair failed and we were unable to recover it.
00:31:13.503 [2024-06-11 08:23:44.003811] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:13.503 [2024-06-11 08:23:44.003829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:13.503 [2024-06-11 08:23:44.003837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:13.503 [2024-06-11 08:23:44.004008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.004048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:31:13.503 [2024-06-11 08:23:44.004202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:31:13.503 [2024-06-11 08:23:44.004461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.004492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.503 [2024-06-11 08:23:44.004369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:31:13.503 qpair failed and we were unable to recover it.
00:31:13.503 [2024-06-11 08:23:44.004370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:31:13.503 [2024-06-11 08:23:44.004913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.005250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.503 [2024-06-11 08:23:44.005277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.503 qpair failed and we were unable to recover it.
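The app_setup_trace NOTICE records above describe how a trace snapshot of the running nvmf target could be captured. A minimal shell sketch of those two suggestions, assuming the SPDK tools are on PATH on the test node, that spdk_trace writes its decoded events to stdout, and with illustrative destination paths under /tmp that are not taken from the log:

  # Decode the live trace shared memory for nvmf instance id 0, as the NOTICE suggests,
  # and keep the decoded events for later inspection (the output path is an assumption).
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
  # Or preserve the raw trace file named in the NOTICE for offline analysis/debug.
  cp /dev/shm/nvmf_trace.0 /tmp/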
00:31:13.503 [2024-06-11 08:23:44.005651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.006038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.006064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.006339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.006611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.006639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.006910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.007332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.007360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.007684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.007942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.007969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.008349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.008777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.008805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.009190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.009558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.009587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.009958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.010334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.010367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 
00:31:13.503 [2024-06-11 08:23:44.010781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.011152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.011179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.011561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.011933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.011960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.012370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.012627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.012654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.013083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.013378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.013405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.013638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.014003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.014030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.014399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.014776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.014804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.015153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.015520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.015548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 
00:31:13.503 [2024-06-11 08:23:44.015938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.016259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.016285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.016673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.016897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.016926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.017209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.017460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.017494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.503 qpair failed and we were unable to recover it. 00:31:13.503 [2024-06-11 08:23:44.017862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.503 [2024-06-11 08:23:44.018216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.504 [2024-06-11 08:23:44.018251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.504 qpair failed and we were unable to recover it. 00:31:13.504 [2024-06-11 08:23:44.018489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.504 [2024-06-11 08:23:44.018699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.504 [2024-06-11 08:23:44.018725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.504 qpair failed and we were unable to recover it. 00:31:13.504 [2024-06-11 08:23:44.019009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.504 [2024-06-11 08:23:44.019376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.504 [2024-06-11 08:23:44.019405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.504 qpair failed and we were unable to recover it. 00:31:13.504 [2024-06-11 08:23:44.019668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.504 [2024-06-11 08:23:44.020046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.504 [2024-06-11 08:23:44.020073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.504 qpair failed and we were unable to recover it. 
00:31:13.504 [2024-06-11 08:23:44.020425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.504 [2024-06-11 08:23:44.020766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.504 [2024-06-11 08:23:44.020794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.504 qpair failed and we were unable to recover it.
[... the same failure sequence repeats for each remaining connection attempt between 08:23:44.021 and 08:23:44.121: two posix_sock_create connect() failures with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:31:13.509 [2024-06-11 08:23:44.121457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.509 [2024-06-11 08:23:44.121697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:13.509 [2024-06-11 08:23:44.121723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:13.509 qpair failed and we were unable to recover it.
00:31:13.509 [2024-06-11 08:23:44.122093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.122468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.122498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.509 qpair failed and we were unable to recover it. 00:31:13.509 [2024-06-11 08:23:44.122871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.123232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.123258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.509 qpair failed and we were unable to recover it. 00:31:13.509 [2024-06-11 08:23:44.123678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.123927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.123959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.509 qpair failed and we were unable to recover it. 00:31:13.509 [2024-06-11 08:23:44.124215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.124544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.124573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.509 qpair failed and we were unable to recover it. 00:31:13.509 [2024-06-11 08:23:44.124852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.125197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.509 [2024-06-11 08:23:44.125223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.125591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.125832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.125858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.126225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.126511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.126540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 
00:31:13.510 [2024-06-11 08:23:44.126928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.127133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.127160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.127511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.127869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.127895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.128266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.128629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.128657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.128896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.129100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.129127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.129483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.129877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.129905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.130288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.130654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.130693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.131040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.131400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.131427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 
00:31:13.510 [2024-06-11 08:23:44.131825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.132046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.132072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.510 [2024-06-11 08:23:44.132528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.132870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.510 [2024-06-11 08:23:44.132896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.510 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.133109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.133484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.133512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.133881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.134093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.134119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.134526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.134752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.134777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.135140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.135347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.135373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.135508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.135742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.135768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 
00:31:13.780 [2024-06-11 08:23:44.136141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.136354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.136383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.136775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.137142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.137168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.137544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.137925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.137951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.138162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.138564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.138592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.138958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.139165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.139192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.139466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.139566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.139593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.139989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.140356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.140382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 
00:31:13.780 [2024-06-11 08:23:44.140593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.140821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.140848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.141071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.141467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.141496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.141746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.142147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.142173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.142400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.142628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.142656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.142995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.143319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.143347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.143691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.144062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.144088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.144474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.144846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.144872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 
00:31:13.780 [2024-06-11 08:23:44.145104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.145454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.145483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.780 [2024-06-11 08:23:44.145899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.146125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.780 [2024-06-11 08:23:44.146154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.780 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.146517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.146741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.146771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.147118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.147482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.147510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.147856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.148219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.148247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.148476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.148853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.148880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.149295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.149669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.149697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 
00:31:13.781 [2024-06-11 08:23:44.150053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.150415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.150455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.150833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.151182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.151208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.151555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.151941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.151968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.152188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.152406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.152433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.152810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.153155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.153183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.153548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.153928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.153954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.154313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.154518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.154546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 
00:31:13.781 [2024-06-11 08:23:44.154991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.155182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.155207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.155431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.155567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.155593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.156073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.156460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.156489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.156861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.157108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.157135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.157478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.157912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.157939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.158162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.158499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.158526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.158912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.159278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.159304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 
00:31:13.781 [2024-06-11 08:23:44.159674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.159976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.160002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.160251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.160594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.160622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.160999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.161376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.161403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.161777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.162131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.162159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.162553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.162907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.162933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.163381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.163745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.163772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.164029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.164271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.164297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 
00:31:13.781 [2024-06-11 08:23:44.164668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.165090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.165116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.165488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.165871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.165897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.781 qpair failed and we were unable to recover it. 00:31:13.781 [2024-06-11 08:23:44.166275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.781 [2024-06-11 08:23:44.166478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.166505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.166856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.167206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.167232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.167593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.167746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.167773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.168109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.168475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.168504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.168854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.169186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.169214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 
00:31:13.782 [2024-06-11 08:23:44.169464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.169708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.169735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.170115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.170488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.170516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.170760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.171124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.171151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.171378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.171790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.171819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.172037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.172404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.172430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.172808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.173174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.173201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.173385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.173742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.173771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 
00:31:13.782 [2024-06-11 08:23:44.174012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.174356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.174382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.174751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.174961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.174988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.175199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.175551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.175579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.175802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.176166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.176192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.176569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.176934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.176961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.177222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.177610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.177638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.178006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.178210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.178236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 
00:31:13.782 [2024-06-11 08:23:44.178592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.178846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.178876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.179227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.179453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.179481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.179771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.180162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.180190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.180539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.180926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.180953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.181294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.181683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.181711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.182075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.182455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.182484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.182839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.183087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.183113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 
00:31:13.782 [2024-06-11 08:23:44.183332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.183683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.183710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.183926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.184332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.184359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.184738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.185108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.782 [2024-06-11 08:23:44.185135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.782 qpair failed and we were unable to recover it. 00:31:13.782 [2024-06-11 08:23:44.185369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.185631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.185657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.185786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.186162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.186190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.186573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.186782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.186809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.187028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.187378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.187404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 
00:31:13.783 [2024-06-11 08:23:44.187922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.188262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.188287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.188671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.188886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.188912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.189262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.189464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.189492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.189856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.190208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.190234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.190567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.190915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.190941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.191318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.191772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.191800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 00:31:13.783 [2024-06-11 08:23:44.192007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.192377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.783 [2024-06-11 08:23:44.192403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.783 qpair failed and we were unable to recover it. 
[The same sequence — two posix.c:1032:posix_sock_create "connect() failed, errno = 111" messages, an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420" message, and "qpair failed and we were unable to recover it." — repeats continuously from 08:23:44.192646 through 08:23:44.292698 (log timestamps 00:31:13.783 to 00:31:13.788).]
00:31:13.788 [2024-06-11 08:23:44.293090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.293432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.293473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 00:31:13.788 [2024-06-11 08:23:44.293707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.294073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.294100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 00:31:13.788 [2024-06-11 08:23:44.294477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.294839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.294866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 00:31:13.788 [2024-06-11 08:23:44.295223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.295470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.295496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 00:31:13.788 [2024-06-11 08:23:44.295888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.296135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.296161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 00:31:13.788 [2024-06-11 08:23:44.296535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.296877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.296903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 00:31:13.788 [2024-06-11 08:23:44.297291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.297504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.297532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 
00:31:13.788 [2024-06-11 08:23:44.297916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.298319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.298345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.788 qpair failed and we were unable to recover it. 00:31:13.788 [2024-06-11 08:23:44.298687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.299015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.788 [2024-06-11 08:23:44.299040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.299244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.299464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.299492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.299719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.300093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.300121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.300486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.300696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.300721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.301098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.301464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.301494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.301890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.302085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.302114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 
00:31:13.789 [2024-06-11 08:23:44.302366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.302713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.302742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.303163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.303492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.303520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.303754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.304134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.304162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.304537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.304788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.304813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.305055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.305289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.305315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.305546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.305907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.305933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.306360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.306684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.306712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 
00:31:13.789 [2024-06-11 08:23:44.306956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.307216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.307243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.307623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.307832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.307858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.308077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.308487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.308516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.308913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.309276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.309303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.309696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.310057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.310084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.310309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.310642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.310670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.311042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.311409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.311435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 
00:31:13.789 [2024-06-11 08:23:44.311822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.312024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.312049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.312285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.312629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.312664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.313069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.313398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.313424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.313669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.314034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.314060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.314432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.314688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.789 [2024-06-11 08:23:44.314714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.789 qpair failed and we were unable to recover it. 00:31:13.789 [2024-06-11 08:23:44.315106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.315502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.315530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.315902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.316267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.316293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 
00:31:13.790 [2024-06-11 08:23:44.316525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.316907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.316934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.317297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.317641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.317669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.318054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.318408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.318435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.318866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.319089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.319115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.319348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.319740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.319770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.320024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.320375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.320402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.320627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.321018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.321045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 
00:31:13.790 [2024-06-11 08:23:44.321259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.321706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.321734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.322101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.322465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.322492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.322872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.323218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.323244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.323461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.323684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.323710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.324057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.324266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.324292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.324669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.325024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.325051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.325278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.325636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.325663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 
00:31:13.790 [2024-06-11 08:23:44.326036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.326257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.326283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.326675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.327012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.327039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.327398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.327633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.327661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.328041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.328398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.328431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.328683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.329061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.329089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.329476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.329716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.329742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.330028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.330244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.330272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 
00:31:13.790 [2024-06-11 08:23:44.330538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.330772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.330799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.331177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.331524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.331552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.331933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.332258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.332283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.332501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.332885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.332913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.333334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.333534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.333562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.790 qpair failed and we were unable to recover it. 00:31:13.790 [2024-06-11 08:23:44.333778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.334151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.790 [2024-06-11 08:23:44.334178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.334407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.334587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.334624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-06-11 08:23:44.334804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.335039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.335066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.335454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.335818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.335845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.336191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.336430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.336468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.336839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.337190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.337218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.337577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.337923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.337950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.338404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.338655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.338690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.339060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.339413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.339450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-06-11 08:23:44.339731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.339942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.339970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.340356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.340729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.340758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.340978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.341336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.341368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.341724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.342092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.342119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.342483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.342720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.342746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.343177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.343515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.343541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.343909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.344281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.344310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-06-11 08:23:44.344675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.344884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.344911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.345280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.345623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.345651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.345999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.346356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.346383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.346724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.347055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.347082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.347470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.347891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.347917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.348166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.348538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.348573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.348940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.349156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.349182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 
00:31:13.791 [2024-06-11 08:23:44.349592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.349955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.349982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.350354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.350587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.350614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.350862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.351281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.351309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.351521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.351891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.351918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.352296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.352667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.352693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.353063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.353298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.353325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.791 qpair failed and we were unable to recover it. 00:31:13.791 [2024-06-11 08:23:44.353695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.791 [2024-06-11 08:23:44.354053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.354079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.792 [2024-06-11 08:23:44.354435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.354797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.354823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.355192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.355528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.355557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.355657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.355976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.356003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.356378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.356763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.356790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.357147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.357454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.357483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.357840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.358035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.358063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.358433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.358777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.358803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.792 [2024-06-11 08:23:44.359161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.359476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.359513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.359892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.360230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.360257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.360628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.360834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.360861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.361283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.361628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.361655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.362044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.362414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.362453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.362835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.363196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.363222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.363586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.364034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.364060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.792 [2024-06-11 08:23:44.364406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.364656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.364684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.365068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.365436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.365486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.365844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.366204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.366230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.366500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.366882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.366910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.367286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.367372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.367396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.367707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.368054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.368081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.368302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.368532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.368564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.792 [2024-06-11 08:23:44.368827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.369200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.369227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.369537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.369770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.369797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.370166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.370425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.370460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.370840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.371176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.371202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.371551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.371937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.371963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.372181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.372416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.372458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 00:31:13.792 [2024-06-11 08:23:44.372836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.373187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.792 [2024-06-11 08:23:44.373213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.792 qpair failed and we were unable to recover it. 
00:31:13.793 [2024-06-11 08:23:44.373590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.373692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.373721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.374102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.374340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.374366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.374765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.375108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.375134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.375508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.375746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.375776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.376142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.376395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.376421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.376818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.377155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.377181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.377411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.377758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.377786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 
00:31:13.793 [2024-06-11 08:23:44.378013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.378377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.378404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.378770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.379158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.379184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.379574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.379984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.380011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.380302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.380723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.380751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.381141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.381487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.381516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.381757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.381893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.381921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.382249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.382633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.382663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 
00:31:13.793 [2024-06-11 08:23:44.383028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.383268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.383296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.383511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.383889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.383916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.384280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.384380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.384410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.384722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.385116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.385143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.385356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.385472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.385500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.385842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.386237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.386265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.386487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.386879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.386905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 
00:31:13.793 [2024-06-11 08:23:44.387114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.387519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.387547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.387761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.388023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.388051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.388455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.388663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.388689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.388902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.389244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.389270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.389634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.389726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.389749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.793 qpair failed and we were unable to recover it. 00:31:13.793 [2024-06-11 08:23:44.390088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.390469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.793 [2024-06-11 08:23:44.390498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.390866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.391241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.391269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 
00:31:13.794 [2024-06-11 08:23:44.391628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.391840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.391866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.391962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.392286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.392312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.392671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.392876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.392901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.393288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.393629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.393656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.393754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.394063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.394091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.394361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.394617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.394643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.395018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.395382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.395409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 
00:31:13.794 [2024-06-11 08:23:44.395774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.396120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.396147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.396549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.396921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.396947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.397307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.397665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.397692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.397911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.398271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.398298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.398656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.399009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.399036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.399403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.399743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.399771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.400160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.400481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.400511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 
00:31:13.794 [2024-06-11 08:23:44.400903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.401264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.401290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.401661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.401870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.401896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.402257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.402622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.402650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.403004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.403289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.403314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.403685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.404036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.404061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.404455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.404870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.404897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.405232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.405588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.405616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 
00:31:13.794 [2024-06-11 08:23:44.405957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.406153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.406179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.406550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.406934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.406960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.407309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.407655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.407684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.794 qpair failed and we were unable to recover it. 00:31:13.794 [2024-06-11 08:23:44.408065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.408404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.794 [2024-06-11 08:23:44.408430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.408766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.408958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.408984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.409355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.409603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.409635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.409858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.410223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.410249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 
00:31:13.795 [2024-06-11 08:23:44.410618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.410819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.410844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.411203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.411542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.411570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.411812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.412183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.412209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.412578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.412796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.412824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.413083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.413409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.413435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.413878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.414228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.414254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.414495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.414845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.414873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 
00:31:13.795 [2024-06-11 08:23:44.415092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.415361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.415388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.415636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.416002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.416028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.416367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.416752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.416780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:13.795 [2024-06-11 08:23:44.417130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.417498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.795 [2024-06-11 08:23:44.417526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:13.795 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.417783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.418007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.418037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.418396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.418828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.418857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.419078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.419457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.419485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 
00:31:14.065 [2024-06-11 08:23:44.419856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.420220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.420247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.420485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.420852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.420879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.421247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.421612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.421641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.421868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.422234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.422260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.422728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.423098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.423125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.423506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.423882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.423909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.424250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.424499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.424527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 
00:31:14.065 [2024-06-11 08:23:44.424872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.425211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.425238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.425334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.425723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.425753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.425969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.426184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.426211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.426579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.426956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.426983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.427235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.427579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.427610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.427832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.428056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.428083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.428331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.428722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.428751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 
00:31:14.065 [2024-06-11 08:23:44.429133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.429465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.429500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.429728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.430101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.430127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.430517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.430763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.430790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.431174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.431429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.431466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.431843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.432209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.432235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.432497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.432878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.432906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 00:31:14.065 [2024-06-11 08:23:44.433135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.433505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.433532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.065 qpair failed and we were unable to recover it. 
00:31:14.065 [2024-06-11 08:23:44.433902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-06-11 08:23:44.434269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.434295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.434716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.435083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.435110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.435492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.435717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.435743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.435996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.436343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.436382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.436605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.436843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.436869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.437116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.437482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.437509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.437859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.438275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.438301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 
00:31:14.066 [2024-06-11 08:23:44.438534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.438885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.438911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.439290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.439509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.439536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.439767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.439988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.440017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.440431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.440813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.440841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.441060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.441402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.441428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.441666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.442050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.442075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.442429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.442812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.442844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 
00:31:14.066 [2024-06-11 08:23:44.443261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.443630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.443660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.444025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.444389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.444415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.444769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.445125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.445151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.445387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.445633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.445661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.446026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.446470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.446497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.446840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.447190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.447216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.447536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.447901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.447927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 
00:31:14.066 [2024-06-11 08:23:44.448296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.448500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.448526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.448859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.449228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.449255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.449483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.449831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.449863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.450219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.450640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.450667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.451015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.451219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.451245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.451634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.452013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.452040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.452426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.452806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.452834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 
00:31:14.066 [2024-06-11 08:23:44.453255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.453457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-06-11 08:23:44.453484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.066 qpair failed and we were unable to recover it. 00:31:14.066 [2024-06-11 08:23:44.453875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.454210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.454237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.454597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.454798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.454824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.455200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.455539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.455567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.455952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.456323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.456348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.456703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.456913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.456938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.457199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.457486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.457512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 
00:31:14.067 [2024-06-11 08:23:44.457749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.457984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.458010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.458247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.458488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.458515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.458880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.459099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.459127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.459506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.459867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.459894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.460264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.460500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.460530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.460915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.461269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.461295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.461498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.461862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.461889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 
00:31:14.067 [2024-06-11 08:23:44.462255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.462458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.462486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.462857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.463208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.463234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.463489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.463865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.463892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.464134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.464355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.464382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.464733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.465146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.465172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.465536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.465751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.465779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.465992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.466221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.466247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 
00:31:14.067 [2024-06-11 08:23:44.466612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.466961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.466988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.467372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.467731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.467760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.468054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.468274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.468300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.468672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.468897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.468928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.469295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.469664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.469692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.469921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.470276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.470303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.470531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.470900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.470928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 
00:31:14.067 [2024-06-11 08:23:44.471298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.471674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.471703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.067 [2024-06-11 08:23:44.472128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.472559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-06-11 08:23:44.472587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.067 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.472827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.473208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.473234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.473611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.473823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.473848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.474217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.474436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.474478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.474703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.475043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.475069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.475271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.475628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.475657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 
00:31:14.068 [2024-06-11 08:23:44.475981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.476341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.476367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.476740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.476936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.476962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.477214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.477484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.477511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.477756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.478125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.478152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.478523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.478795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.478821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.479030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.479398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.479424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.479792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.480146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.480172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 
00:31:14.068 [2024-06-11 08:23:44.480507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.480861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.480887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.481231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.481581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.481608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.481956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.482314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.482341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.482696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.483053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.483079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.483470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.483865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.483892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.484266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.484674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.484701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.484816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.485092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.485119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 
00:31:14.068 [2024-06-11 08:23:44.485473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.485690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.485715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.486013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.486257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.486282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.486505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.486886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.486912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.487259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.487683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.487710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.488090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.488468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.488497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.488871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.489239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.489266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.489620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.489976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.490002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 
00:31:14.068 [2024-06-11 08:23:44.490366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.490563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.490590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.490824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.491148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.491175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.068 qpair failed and we were unable to recover it. 00:31:14.068 [2024-06-11 08:23:44.491554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.491837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.068 [2024-06-11 08:23:44.491862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.492213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.492567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.492595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.492973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.493259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.493285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.493542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.493916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.493944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.494153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.494516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.494542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 
00:31:14.069 [2024-06-11 08:23:44.494747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.494953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.494980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.495329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.495679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.495706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.496074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.496434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.496475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.496805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.497150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.497178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.497460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.497738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.497765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.498147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.498493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.498521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.498902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.499269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.499295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 
00:31:14.069 [2024-06-11 08:23:44.499633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.500009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.500035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.500404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.500647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.500678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.500997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.501369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.501395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.501671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.501894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.501920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.502307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.502674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.502702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.502939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.503298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.503325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.503552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.503926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.503952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 
00:31:14.069 [2024-06-11 08:23:44.504061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.504451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.504479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.504856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.505053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.505079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.505465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.505686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.505712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.506018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.506235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.506260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.506478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.506843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.069 [2024-06-11 08:23:44.506870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.069 qpair failed and we were unable to recover it. 00:31:14.069 [2024-06-11 08:23:44.507239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.507476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.507503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.507924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.508262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.508289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 
00:31:14.070 [2024-06-11 08:23:44.508660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.509031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.509059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.509272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.509622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.509650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.509999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.510222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.510249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.510461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.510822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.510849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.511215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.511485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.511512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.511761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.511863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.511889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.511983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.512375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.512402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 
00:31:14.070 [2024-06-11 08:23:44.512573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.512919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.512945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.513171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.513539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.513566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.513673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.514000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.514027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.514413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.514665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.514693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.515059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.515413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.515459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.515857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.516226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.516253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.516622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.516867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.516893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 
00:31:14.070 [2024-06-11 08:23:44.517249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.517458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.517485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.517834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.517922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.517946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.518095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.518333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.518361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.518600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.518968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.518994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.519430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.519817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.519844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.520208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.520562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.520590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.520962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.521327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.521354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 
00:31:14.070 [2024-06-11 08:23:44.521576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.521924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.521951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.522192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.522518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.522546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.522918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.523118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.523145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.523489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.523846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.523873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.524245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.524615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.524642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.070 [2024-06-11 08:23:44.525004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.525226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.070 [2024-06-11 08:23:44.525252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.070 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.525494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.525843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.525870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.071 [2024-06-11 08:23:44.526228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.526451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.526478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.526867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.527208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.527235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.527577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.527811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.527838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.528217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.528555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.528585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.528810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.529083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.529112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.529338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.529693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.529721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.530042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.530385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.530412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.071 [2024-06-11 08:23:44.530768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.531136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.531163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.531554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.531925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.531951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.532315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.532722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.532751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.533169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.533533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.533560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.533934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.534264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.534291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.534679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.535093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.535121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.535489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.535729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.535756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.071 [2024-06-11 08:23:44.536016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.536394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.536432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.536825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.537170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.537197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.537291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.537604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.537633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.537995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.538324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.538351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.538596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.538833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.538862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.539224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.539589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.539617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.539858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.540211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.540237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.071 [2024-06-11 08:23:44.540584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.540840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.540866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.541110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.541325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.541351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.541694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.542048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.542076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.542435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.542541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.542573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.542810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.543190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.543216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.543573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.543815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.543843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 00:31:14.071 [2024-06-11 08:23:44.544202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.544410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.071 [2024-06-11 08:23:44.544436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.071 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-06-11 08:23:44.544730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.545090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.545118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.545499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.545872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.545899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.546115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.546480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.546508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.546740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.546977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.547004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.547366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.547568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.547594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.547926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.548277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.548303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.548669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.549055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.549087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-06-11 08:23:44.549433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.549808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.549835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.549933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.550262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.550290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.550681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.551025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.551051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.551410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.551598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.551625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.552046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.552425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.552462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.552798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.553040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.553068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.553428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.553803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.553830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-06-11 08:23:44.554264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.554602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.554630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.554878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.555112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.555139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.555366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.555577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.555611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.555982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.556317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.556346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.556698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.556909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.556936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.557279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.557630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.557658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.558012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.558231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.558261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-06-11 08:23:44.558359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.558561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.558589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.558878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.559224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.559252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.559711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.559915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.559942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.560323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.560523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.560550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.560920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.561266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.561293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.561472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.561852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.561880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 00:31:14.072 [2024-06-11 08:23:44.562209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.562565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.562593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.072 qpair failed and we were unable to recover it. 
00:31:14.072 [2024-06-11 08:23:44.562975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.072 [2024-06-11 08:23:44.563336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.563363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.563626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.563737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.563765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.564020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.564373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.564401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.564772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.564974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.565001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.565382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.565604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.565632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.566001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.566351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.566379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.566754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.567128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.567155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-06-11 08:23:44.567376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.567696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.567723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.567931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.568193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.568220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.568567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.568930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.568957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.569332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.569574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.569600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.569953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.570166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.570191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.570450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.570811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.570838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.571204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.571414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.571450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-06-11 08:23:44.571836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.572208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.572235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.572492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.572760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.572789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.573158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.573360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.573387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.573650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.573988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.574014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.574410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.574828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.574855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.575213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.575571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.575598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.576001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.576333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.576359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-06-11 08:23:44.576719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.576958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.576985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.577347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.577720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.577746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.578111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.578463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.578490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.578883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.579213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.579240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.579589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.579847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.579873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.580089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.580459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.580487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.580857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.581198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.581225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 
00:31:14.073 [2024-06-11 08:23:44.581503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.581875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.581901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.073 qpair failed and we were unable to recover it. 00:31:14.073 [2024-06-11 08:23:44.582262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.073 [2024-06-11 08:23:44.582655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.582682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.582917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.583298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.583325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.583580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.583932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.583958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.584351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.584574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.584600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.584976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.585366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.585392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.585820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.586169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.586195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-06-11 08:23:44.586630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.587011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.587037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.587435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.587617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.587642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.588015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.588321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.588347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.588717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.589080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.589107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.589481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.589823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.589850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.590236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.590602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.590629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.590975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.591337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.591363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-06-11 08:23:44.591771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.592118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.592143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.592506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.592744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.592770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.593138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.593352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.593377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.593741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.594122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.594148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.594503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.594897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.594925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.595145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.595375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.595400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.595778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.596118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.596144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 
00:31:14.074 [2024-06-11 08:23:44.596283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.596626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.596653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.597050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.597415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.597459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.597687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.597918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.597943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.598342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.598716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.598745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.074 [2024-06-11 08:23:44.598985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.599347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.074 [2024-06-11 08:23:44.599374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.074 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.599714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.599928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.599954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.600302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.600527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.600557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-06-11 08:23:44.600957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.601331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.601358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.601610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.601826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.601852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.602236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.602483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.602510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.602737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.603104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.603131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 08:23:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:14.075 [2024-06-11 08:23:44.603481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 08:23:44 -- common/autotest_common.sh@852 -- # return 0 00:31:14.075 [2024-06-11 08:23:44.603879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 08:23:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:14.075 [2024-06-11 08:23:44.603905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.604127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 08:23:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:14.075 08:23:44 -- common/autotest_common.sh@10 -- # set +x 00:31:14.075 [2024-06-11 08:23:44.604366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.604392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-06-11 08:23:44.604646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.605036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.605063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.605427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.605683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.605710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.606080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.606348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.606373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.606741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.607091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.607118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.607483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.607727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.607753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.608019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.608378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.608413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.608745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.609112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.609140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-06-11 08:23:44.609500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.609754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.609784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.610156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.610523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.610550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.610925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.611127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.611153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.611539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.611921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.611948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.612343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.612698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.612726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.613083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.613470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.613500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.613722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.614034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.614062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 
00:31:14.075 [2024-06-11 08:23:44.614348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.614740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.614769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.614992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.615372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.615398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.615822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.616025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.616056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.616427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.616770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.616800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.617148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.617502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.617530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.075 [2024-06-11 08:23:44.617932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.618170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.075 [2024-06-11 08:23:44.618199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.075 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.618570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.618818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.618846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-06-11 08:23:44.619212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.619569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.619598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.619819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.620068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.620097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.620302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.620536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.620563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.620683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.621004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.621030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.621297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.621682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.621711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.621968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.622303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.622336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.622709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.622960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.622989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-06-11 08:23:44.623373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.623577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.623604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.623845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.624175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.624203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.624519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.624757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.624786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.625147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.625348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.625375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.625791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.626127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.626155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.626531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.626759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.626784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.627182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.627387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.627413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-06-11 08:23:44.627668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.627977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.628004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.628257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.628594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.628628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.628983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.629335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.629362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.629736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.630102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.630130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.630512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.630732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.630758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.631119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.631469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.631497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.631873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.632240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.632268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 
00:31:14.076 [2024-06-11 08:23:44.632677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.632904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.632930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.633301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.633570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.633597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.633833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.634179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.634206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.634583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.634940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.634968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.635322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.635533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.076 [2024-06-11 08:23:44.635561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.076 qpair failed and we were unable to recover it. 00:31:14.076 [2024-06-11 08:23:44.635799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.636184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.636210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.636650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.636979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.637007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 
00:31:14.077 [2024-06-11 08:23:44.637217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.637576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.637603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.637856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.638225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.638252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.638481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.638868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.638895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.639113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.639364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.639392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.639771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.640115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.640141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.640509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.640848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.640877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.641128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.641490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.641517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 
00:31:14.077 [2024-06-11 08:23:44.641799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.642186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.642214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.642572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.642935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.642962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.643063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.643169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.643199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.643495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.643719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.643745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.644128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.644483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.644510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.644884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 08:23:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.077 [2024-06-11 08:23:44.645220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.645248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 
00:31:14.077 08:23:44 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:14.077 [2024-06-11 08:23:44.645638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 08:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.077 [2024-06-11 08:23:44.645849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.645876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 08:23:44 -- common/autotest_common.sh@10 -- # set +x 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.646254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.646476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.646503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.646867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.647203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.647231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.647329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.647642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.647670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.648031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.648384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.648410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.648670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.649035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.649062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.649402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.649769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.649796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 
00:31:14.077 [2024-06-11 08:23:44.650050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.650411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.650453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.650807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.651167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.651194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.651576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.651794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.651820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.652186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.652401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.652427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.652827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.653179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.653205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.077 [2024-06-11 08:23:44.653613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.653981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.077 [2024-06-11 08:23:44.654007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.077 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.654466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.654686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.654712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 
00:31:14.078 [2024-06-11 08:23:44.655151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.655508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.655536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.655761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.655995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.656021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.656423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.656656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.656683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.657020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.657354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.657379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.657712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.658060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.658086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.658465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.658676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.658703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.659081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.659454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.659482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 
00:31:14.078 [2024-06-11 08:23:44.659696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.660063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.660091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.660345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.660702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.660731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.661105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.661461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.661489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.661945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.662036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.662060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.662394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.662649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.662677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.663026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.663362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.663388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.663749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.664106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.664133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 
00:31:14.078 [2024-06-11 08:23:44.664575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.664809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.664834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.665200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.665560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.665587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.665957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.666367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.666394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 Malloc0 00:31:14.078 [2024-06-11 08:23:44.666750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.667102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.667129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.667272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 08:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.078 [2024-06-11 08:23:44.667516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.667543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 08:23:44 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:14.078 [2024-06-11 08:23:44.667895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 08:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.078 [2024-06-11 08:23:44.668204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 08:23:44 -- common/autotest_common.sh@10 -- # set +x 00:31:14.078 [2024-06-11 08:23:44.668238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 00:31:14.078 [2024-06-11 08:23:44.668595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.668699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.668725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.078 qpair failed and we were unable to recover it. 
00:31:14.078 [2024-06-11 08:23:44.668968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.078 [2024-06-11 08:23:44.669387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.669413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.669820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.670182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.670207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.670460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.670839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.670866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.671227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.671448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.671476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.671908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.672257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.672283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.672630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.672862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.672889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.673163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.673528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.673555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 
00:31:14.079 [2024-06-11 08:23:44.673906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.673894] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.079 [2024-06-11 08:23:44.674271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.674297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.674399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.674657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.674684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.674930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.675280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.675306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.675686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.676035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.676063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.676502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.676881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.676908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.677181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.677558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.677585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.677811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.678038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.678064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 
00:31:14.079 [2024-06-11 08:23:44.678428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.678773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.678799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.679178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.679521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.679549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.679932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.680297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.680323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.680720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.681090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.681116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.681489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.681752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.681779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.682117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.682487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.682514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.079 qpair failed and we were unable to recover it. 00:31:14.079 [2024-06-11 08:23:44.682915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 08:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.079 [2024-06-11 08:23:44.683277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.079 [2024-06-11 08:23:44.683303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 
00:31:14.080 08:23:44 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:14.080 [2024-06-11 08:23:44.683406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 08:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.080 [2024-06-11 08:23:44.683760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.683787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 08:23:44 -- common/autotest_common.sh@10 -- # set +x 00:31:14.080 [2024-06-11 08:23:44.684180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.684408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.684434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.684840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.685214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.685239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.685464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.685852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.685879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.686145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.686491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.686518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.686806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.686912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.686938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.687089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.687427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.687463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 
00:31:14.080 [2024-06-11 08:23:44.687779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.687990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.688016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.688420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.688786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.688813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.689196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.689561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.689589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.689970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.690162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.690188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.690514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.690891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.690917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.691118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.691464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.691492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.691845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.692210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.692236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 
00:31:14.080 [2024-06-11 08:23:44.692497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.692728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.692755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.693199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.693575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.693603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.693983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.694332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.694357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.694734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 08:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.080 [2024-06-11 08:23:44.695022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.695051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.695262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 08:23:44 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.080 [2024-06-11 08:23:44.695617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 08:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.080 [2024-06-11 08:23:44.695645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 08:23:44 -- common/autotest_common.sh@10 -- # set +x 00:31:14.080 [2024-06-11 08:23:44.696004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.696382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.696409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.696744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.697089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.697116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 
00:31:14.080 [2024-06-11 08:23:44.697501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.697601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.697629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.697874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.698247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.698273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.698653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.699013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.699041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.699423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.699667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.699694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.080 qpair failed and we were unable to recover it. 00:31:14.080 [2024-06-11 08:23:44.699933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.080 [2024-06-11 08:23:44.700149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-06-11 08:23:44.700175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-06-11 08:23:44.700404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-06-11 08:23:44.700750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-06-11 08:23:44.700778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-06-11 08:23:44.701148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-06-11 08:23:44.701234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-06-11 08:23:44.701259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 
00:31:14.081 [2024-06-11 08:23:44.701490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-06-11 08:23:44.701895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.081 [2024-06-11 08:23:44.701921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.081 qpair failed and we were unable to recover it. 00:31:14.081 [2024-06-11 08:23:44.702286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.702502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.702531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.346 qpair failed and we were unable to recover it. 00:31:14.346 [2024-06-11 08:23:44.702862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.703202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.703228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.346 qpair failed and we were unable to recover it. 00:31:14.346 [2024-06-11 08:23:44.703653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.704015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.704042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.346 qpair failed and we were unable to recover it. 00:31:14.346 [2024-06-11 08:23:44.704297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.704675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.704703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.346 qpair failed and we were unable to recover it. 00:31:14.346 [2024-06-11 08:23:44.705072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.705429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.705468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.346 qpair failed and we were unable to recover it. 00:31:14.346 [2024-06-11 08:23:44.705727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.706111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.346 [2024-06-11 08:23:44.706137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.346 qpair failed and we were unable to recover it. 
00:31:14.347 [2024-06-11 08:23:44.706509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.706904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.706932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 08:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:14.347 [2024-06-11 08:23:44.707324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 08:23:44 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:14.347 [2024-06-11 08:23:44.707569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.707600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 08:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:14.347 08:23:44 -- common/autotest_common.sh@10 -- # set +x
00:31:14.347 [2024-06-11 08:23:44.707972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.708334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.708361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 [2024-06-11 08:23:44.708711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.709077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.709103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 [2024-06-11 08:23:44.709478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.709619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.709644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 [2024-06-11 08:23:44.709944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.710141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:14.347 [2024-06-11 08:23:44.710166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 [2024-06-11 08:23:44.710539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.710788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.710817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.711037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.711365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.711392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.711788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.712130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.712156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.712521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.712903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.712929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.713162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.713373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.713407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.713820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.714062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.347 [2024-06-11 08:23:44.714089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa788000b90 with addr=10.0.0.2, port=4420 00:31:14.347 qpair failed and we were unable to recover it. 
00:31:14.347 [2024-06-11 08:23:44.714260] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:14.347 08:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:14.347 08:23:44 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:14.347 08:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:14.347 08:23:44 -- common/autotest_common.sh@10 -- # set +x
00:31:14.347 [2024-06-11 08:23:44.724941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.347 [2024-06-11 08:23:44.725084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.347 [2024-06-11 08:23:44.725133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.347 [2024-06-11 08:23:44.725155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.347 [2024-06-11 08:23:44.725174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90
00:31:14.347 [2024-06-11 08:23:44.725226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 08:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:14.347 08:23:44 -- host/target_disconnect.sh@58 -- # wait 1259711
00:31:14.347 [2024-06-11 08:23:44.734843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:14.347 [2024-06-11 08:23:44.734938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:14.347 [2024-06-11 08:23:44.734969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:14.347 [2024-06-11 08:23:44.734983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:14.347 [2024-06-11 08:23:44.734996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90
00:31:14.347 [2024-06-11 08:23:44.735026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:31:14.347 qpair failed and we were unable to recover it.
00:31:14.347 [2024-06-11 08:23:44.744941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.347 [2024-06-11 08:23:44.745009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.347 [2024-06-11 08:23:44.745033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.347 [2024-06-11 08:23:44.745043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.347 [2024-06-11 08:23:44.745052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.347 [2024-06-11 08:23:44.745075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.754873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.347 [2024-06-11 08:23:44.754941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.347 [2024-06-11 08:23:44.754964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.347 [2024-06-11 08:23:44.754971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.347 [2024-06-11 08:23:44.754977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.347 [2024-06-11 08:23:44.754993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.764805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.347 [2024-06-11 08:23:44.764881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.347 [2024-06-11 08:23:44.764900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.347 [2024-06-11 08:23:44.764907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.347 [2024-06-11 08:23:44.764913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.347 [2024-06-11 08:23:44.764929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.347 qpair failed and we were unable to recover it. 
00:31:14.347 [2024-06-11 08:23:44.774951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.347 [2024-06-11 08:23:44.775040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.347 [2024-06-11 08:23:44.775060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.347 [2024-06-11 08:23:44.775067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.347 [2024-06-11 08:23:44.775075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.347 [2024-06-11 08:23:44.775091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.347 qpair failed and we were unable to recover it. 00:31:14.347 [2024-06-11 08:23:44.784985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.347 [2024-06-11 08:23:44.785059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.347 [2024-06-11 08:23:44.785078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.347 [2024-06-11 08:23:44.785085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.785091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.785108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.794999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.795093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.795112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.795119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.795130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.795146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 
00:31:14.348 [2024-06-11 08:23:44.805016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.805101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.805122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.805132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.805139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.805156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.815056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.815117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.815137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.815144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.815150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.815166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.825104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.825182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.825202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.825209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.825215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.825232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 
00:31:14.348 [2024-06-11 08:23:44.835007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.835067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.835088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.835095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.835102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.835119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.845164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.845243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.845267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.845275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.845281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.845299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.855167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.855232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.855253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.855263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.855269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.855286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 
00:31:14.348 [2024-06-11 08:23:44.865209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.865269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.865288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.865295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.865301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.865318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.875248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.875315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.875334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.875342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.875348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.875364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.885288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.885365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.885385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.885393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.885405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.885421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 
00:31:14.348 [2024-06-11 08:23:44.895327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.895387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.895407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.895415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.895421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.895445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.905346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.905419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.905443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.905451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.905457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.905473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 00:31:14.348 [2024-06-11 08:23:44.915381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.348 [2024-06-11 08:23:44.915449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.348 [2024-06-11 08:23:44.915469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.348 [2024-06-11 08:23:44.915476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.348 [2024-06-11 08:23:44.915482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.348 [2024-06-11 08:23:44.915498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.348 qpair failed and we were unable to recover it. 
00:31:14.348 [2024-06-11 08:23:44.925444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.349 [2024-06-11 08:23:44.925524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.349 [2024-06-11 08:23:44.925543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.349 [2024-06-11 08:23:44.925551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.349 [2024-06-11 08:23:44.925556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.349 [2024-06-11 08:23:44.925572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.349 qpair failed and we were unable to recover it. 00:31:14.349 [2024-06-11 08:23:44.935480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.349 [2024-06-11 08:23:44.935550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.349 [2024-06-11 08:23:44.935569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.349 [2024-06-11 08:23:44.935576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.349 [2024-06-11 08:23:44.935582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.349 [2024-06-11 08:23:44.935598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.349 qpair failed and we were unable to recover it. 00:31:14.349 [2024-06-11 08:23:44.945547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.349 [2024-06-11 08:23:44.945623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.349 [2024-06-11 08:23:44.945642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.349 [2024-06-11 08:23:44.945649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.349 [2024-06-11 08:23:44.945655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.349 [2024-06-11 08:23:44.945671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.349 qpair failed and we were unable to recover it. 
00:31:14.349 [2024-06-11 08:23:44.955500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.349 [2024-06-11 08:23:44.955566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.349 [2024-06-11 08:23:44.955584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.349 [2024-06-11 08:23:44.955591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.349 [2024-06-11 08:23:44.955597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.349 [2024-06-11 08:23:44.955613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.349 qpair failed and we were unable to recover it. 00:31:14.349 [2024-06-11 08:23:44.965556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.349 [2024-06-11 08:23:44.965624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.349 [2024-06-11 08:23:44.965642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.349 [2024-06-11 08:23:44.965649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.349 [2024-06-11 08:23:44.965655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.349 [2024-06-11 08:23:44.965671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.349 qpair failed and we were unable to recover it. 00:31:14.349 [2024-06-11 08:23:44.975605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.349 [2024-06-11 08:23:44.975665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.349 [2024-06-11 08:23:44.975684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.349 [2024-06-11 08:23:44.975698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.349 [2024-06-11 08:23:44.975704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.349 [2024-06-11 08:23:44.975719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.349 qpair failed and we were unable to recover it. 
00:31:14.349 [2024-06-11 08:23:44.985667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.349 [2024-06-11 08:23:44.985760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.349 [2024-06-11 08:23:44.985781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.349 [2024-06-11 08:23:44.985790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.349 [2024-06-11 08:23:44.985801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.349 [2024-06-11 08:23:44.985818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.349 qpair failed and we were unable to recover it. 00:31:14.612 [2024-06-11 08:23:44.995536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.612 [2024-06-11 08:23:44.995638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.612 [2024-06-11 08:23:44.995659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.612 [2024-06-11 08:23:44.995666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.612 [2024-06-11 08:23:44.995673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.612 [2024-06-11 08:23:44.995688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.612 qpair failed and we were unable to recover it. 00:31:14.612 [2024-06-11 08:23:45.005747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.612 [2024-06-11 08:23:45.005840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.612 [2024-06-11 08:23:45.005859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.612 [2024-06-11 08:23:45.005867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.612 [2024-06-11 08:23:45.005873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.612 [2024-06-11 08:23:45.005889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.612 qpair failed and we were unable to recover it. 
00:31:14.612 [2024-06-11 08:23:45.015805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.612 [2024-06-11 08:23:45.015885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.612 [2024-06-11 08:23:45.015903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.612 [2024-06-11 08:23:45.015911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.612 [2024-06-11 08:23:45.015917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.612 [2024-06-11 08:23:45.015933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.612 qpair failed and we were unable to recover it. 00:31:14.612 [2024-06-11 08:23:45.025837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.612 [2024-06-11 08:23:45.025909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.025928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.025935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.025941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.025958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.035834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.035897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.035915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.035922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.035928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.035944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 
00:31:14.613 [2024-06-11 08:23:45.045782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.045860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.045878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.045886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.045892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.045907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.055842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.055915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.055932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.055939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.055945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.055961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.065895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.066000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.066024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.066031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.066038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.066053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 
00:31:14.613 [2024-06-11 08:23:45.075880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.076018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.076036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.076043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.076049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.076065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.085801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.085877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.085896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.085903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.085909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.085924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.095953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.096022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.096040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.096047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.096053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.096069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 
00:31:14.613 [2024-06-11 08:23:45.105989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.106057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.106075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.106082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.106089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.106104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.116013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.116073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.116092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.116099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.116106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.116122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.126045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.126112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.126132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.126141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.126149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.126165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 
00:31:14.613 [2024-06-11 08:23:45.136076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.136140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.136160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.136167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.136173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.136190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.146097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.146158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.146177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.146184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.613 [2024-06-11 08:23:45.146190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.613 [2024-06-11 08:23:45.146206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.613 qpair failed and we were unable to recover it. 00:31:14.613 [2024-06-11 08:23:45.156179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.613 [2024-06-11 08:23:45.156271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.613 [2024-06-11 08:23:45.156296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.613 [2024-06-11 08:23:45.156303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.156309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.156326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 
00:31:14.614 [2024-06-11 08:23:45.166072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.166151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.166171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.166178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.166184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.166201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 00:31:14.614 [2024-06-11 08:23:45.176178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.176248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.176267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.176274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.176280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.176296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 00:31:14.614 [2024-06-11 08:23:45.186215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.186277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.186295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.186302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.186308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.186324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 
00:31:14.614 [2024-06-11 08:23:45.196272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.196347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.196366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.196373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.196379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.196399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 00:31:14.614 [2024-06-11 08:23:45.206268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.206337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.206357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.206364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.206370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.206386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 00:31:14.614 [2024-06-11 08:23:45.216300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.216358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.216376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.216383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.216389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.216404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 
00:31:14.614 [2024-06-11 08:23:45.226345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.226414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.226432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.226445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.226451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.226467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 00:31:14.614 [2024-06-11 08:23:45.236388] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.236456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.236475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.236482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.236488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.236504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 00:31:14.614 [2024-06-11 08:23:45.246290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.246375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.246399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.246406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.246412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.246435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 
00:31:14.614 [2024-06-11 08:23:45.256435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.614 [2024-06-11 08:23:45.256503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.614 [2024-06-11 08:23:45.256522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.614 [2024-06-11 08:23:45.256529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.614 [2024-06-11 08:23:45.256535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.614 [2024-06-11 08:23:45.256551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.614 qpair failed and we were unable to recover it. 00:31:14.877 [2024-06-11 08:23:45.266475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.266541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.266559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.266566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.266572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.266588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 00:31:14.877 [2024-06-11 08:23:45.276513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.276575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.276593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.276600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.276606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.276621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 
00:31:14.877 [2024-06-11 08:23:45.286518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.286587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.286606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.286614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.286625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.286641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 00:31:14.877 [2024-06-11 08:23:45.296416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.296476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.296496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.296502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.296508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.296525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 00:31:14.877 [2024-06-11 08:23:45.306572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.306634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.306653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.306660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.306666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.306682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 
00:31:14.877 [2024-06-11 08:23:45.316602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.316660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.316679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.316686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.316692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.316708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 00:31:14.877 [2024-06-11 08:23:45.326641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.326716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.326736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.326743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.326750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.326766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 00:31:14.877 [2024-06-11 08:23:45.336682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.336753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.336771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.877 [2024-06-11 08:23:45.336778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.877 [2024-06-11 08:23:45.336785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.877 [2024-06-11 08:23:45.336800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.877 qpair failed and we were unable to recover it. 
00:31:14.877 [2024-06-11 08:23:45.346581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.877 [2024-06-11 08:23:45.346642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.877 [2024-06-11 08:23:45.346661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.346668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.346674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.346691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.356717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.356776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.356794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.356802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.356808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.356824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.366766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.366833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.366851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.366859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.366865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.366880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 
00:31:14.878 [2024-06-11 08:23:45.376792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.376848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.376867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.376874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.376885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.376900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.386831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.386882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.386900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.386907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.386914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.386929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.396882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.396947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.396964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.396971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.396977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.396992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 
00:31:14.878 [2024-06-11 08:23:45.406890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.406971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.406988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.406995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.407001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.407017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.416791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.416858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.416876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.416883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.416889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.416905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.426958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.427024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.427042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.427050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.427056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.427071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 
00:31:14.878 [2024-06-11 08:23:45.436872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.436935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.436957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.436968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.436975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.436993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.447010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.447090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.447110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.447117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.447123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.447140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.457107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.457176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.457195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.457202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.457208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.457224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 
00:31:14.878 [2024-06-11 08:23:45.466937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.467004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.467023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.467036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.467043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.467060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.477117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.878 [2024-06-11 08:23:45.477181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.878 [2024-06-11 08:23:45.477201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.878 [2024-06-11 08:23:45.477208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.878 [2024-06-11 08:23:45.477215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.878 [2024-06-11 08:23:45.477231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.878 qpair failed and we were unable to recover it. 00:31:14.878 [2024-06-11 08:23:45.487143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.879 [2024-06-11 08:23:45.487226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.879 [2024-06-11 08:23:45.487247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.879 [2024-06-11 08:23:45.487254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.879 [2024-06-11 08:23:45.487262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.879 [2024-06-11 08:23:45.487278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.879 qpair failed and we were unable to recover it. 
00:31:14.879 [2024-06-11 08:23:45.497193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.879 [2024-06-11 08:23:45.497263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.879 [2024-06-11 08:23:45.497282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.879 [2024-06-11 08:23:45.497289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.879 [2024-06-11 08:23:45.497296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.879 [2024-06-11 08:23:45.497312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.879 qpair failed and we were unable to recover it. 00:31:14.879 [2024-06-11 08:23:45.507202] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.879 [2024-06-11 08:23:45.507270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.879 [2024-06-11 08:23:45.507289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.879 [2024-06-11 08:23:45.507296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.879 [2024-06-11 08:23:45.507313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.879 [2024-06-11 08:23:45.507333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.879 qpair failed and we were unable to recover it. 00:31:14.879 [2024-06-11 08:23:45.517244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:14.879 [2024-06-11 08:23:45.517308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:14.879 [2024-06-11 08:23:45.517327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:14.879 [2024-06-11 08:23:45.517334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:14.879 [2024-06-11 08:23:45.517340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:14.879 [2024-06-11 08:23:45.517357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:14.879 qpair failed and we were unable to recover it. 
00:31:15.142 [2024-06-11 08:23:45.527265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.527332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.527351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.527358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.527365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.527382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.537341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.537452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.537471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.537478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.537485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.537502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.547242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.547309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.547327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.547334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.547340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.547356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 
00:31:15.142 [2024-06-11 08:23:45.557353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.557413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.557431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.557453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.557459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.557475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.567400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.567484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.567502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.567509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.567515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.567531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.577426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.577489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.577507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.577514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.577520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.577537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 
00:31:15.142 [2024-06-11 08:23:45.587488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.587548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.587567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.587574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.587580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.587596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.597494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.597561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.597582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.597591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.597598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.597616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.607527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.607598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.607618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.607625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.607631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.607647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 
00:31:15.142 [2024-06-11 08:23:45.617535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.617605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.617624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.617631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.617638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.617654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.627599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.627675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.627694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.627701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.627707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.627723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 00:31:15.142 [2024-06-11 08:23:45.637633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.142 [2024-06-11 08:23:45.637692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.142 [2024-06-11 08:23:45.637710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.142 [2024-06-11 08:23:45.637718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.142 [2024-06-11 08:23:45.637724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.142 [2024-06-11 08:23:45.637740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.142 qpair failed and we were unable to recover it. 
00:31:15.142 [2024-06-11 08:23:45.647656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.647780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.647804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.647813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.647819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.647835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.657674] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.657735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.657755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.657762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.657768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.657784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.667719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.667782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.667800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.667807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.667814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.667830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 
00:31:15.143 [2024-06-11 08:23:45.677753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.677821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.677839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.677846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.677852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.677868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.687643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.687725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.687744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.687751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.687757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.687778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.697760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.697828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.697848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.697855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.697861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.697877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 
00:31:15.143 [2024-06-11 08:23:45.707792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.707864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.707883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.707890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.707896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.707912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.717880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.717952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.717971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.717978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.717984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.718000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.727779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.727851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.727870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.727877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.727883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.727899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 
00:31:15.143 [2024-06-11 08:23:45.737913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.738031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.738058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.738066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.738072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.738089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.747959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.748021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.748040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.748047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.748054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.748069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.757915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.757983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.758001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.758008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.758014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.758030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 
00:31:15.143 [2024-06-11 08:23:45.768006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.768090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.768108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.768115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.143 [2024-06-11 08:23:45.768121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.143 [2024-06-11 08:23:45.768137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.143 qpair failed and we were unable to recover it. 00:31:15.143 [2024-06-11 08:23:45.777940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.143 [2024-06-11 08:23:45.778002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.143 [2024-06-11 08:23:45.778020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.143 [2024-06-11 08:23:45.778027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.144 [2024-06-11 08:23:45.778038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.144 [2024-06-11 08:23:45.778055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.144 qpair failed and we were unable to recover it. 00:31:15.408 [2024-06-11 08:23:45.788093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.408 [2024-06-11 08:23:45.788164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.408 [2024-06-11 08:23:45.788183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.408 [2024-06-11 08:23:45.788190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.408 [2024-06-11 08:23:45.788196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.408 [2024-06-11 08:23:45.788212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.408 qpair failed and we were unable to recover it. 
00:31:15.408 [2024-06-11 08:23:45.798122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.408 [2024-06-11 08:23:45.798196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.408 [2024-06-11 08:23:45.798215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.408 [2024-06-11 08:23:45.798222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.408 [2024-06-11 08:23:45.798228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.408 [2024-06-11 08:23:45.798244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.408 qpair failed and we were unable to recover it. 00:31:15.408 [2024-06-11 08:23:45.808147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.408 [2024-06-11 08:23:45.808224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.408 [2024-06-11 08:23:45.808242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.408 [2024-06-11 08:23:45.808249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.408 [2024-06-11 08:23:45.808255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.408 [2024-06-11 08:23:45.808271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.408 qpair failed and we were unable to recover it. 00:31:15.408 [2024-06-11 08:23:45.818151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.408 [2024-06-11 08:23:45.818215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.408 [2024-06-11 08:23:45.818233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.408 [2024-06-11 08:23:45.818241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.408 [2024-06-11 08:23:45.818247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.408 [2024-06-11 08:23:45.818262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 
00:31:15.409 [2024-06-11 08:23:45.828201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.828298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.828322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.828331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.828337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.828354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.838235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.838319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.838340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.838347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.838354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.838370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.848318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.848450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.848470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.848477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.848483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.848499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 
00:31:15.409 [2024-06-11 08:23:45.858161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.858221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.858240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.858247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.858254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.858270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.868311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.868380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.868399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.868407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.868418] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.868434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.878409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.878501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.878520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.878527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.878533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.878550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 
00:31:15.409 [2024-06-11 08:23:45.888404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.888510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.888529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.888536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.888543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.888560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.898411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.898484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.898502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.898510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.898517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.898533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.908466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.908524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.908542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.908548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.908554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.908571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 
00:31:15.409 [2024-06-11 08:23:45.918506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.918579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.918598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.918605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.918611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.918627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.928519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.928595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.928614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.928621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.928627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.928643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.938511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.938580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.938599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.938606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.938612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.938628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 
00:31:15.409 [2024-06-11 08:23:45.948459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.948531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.948549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.409 [2024-06-11 08:23:45.948556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.409 [2024-06-11 08:23:45.948563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.409 [2024-06-11 08:23:45.948580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.409 qpair failed and we were unable to recover it. 00:31:15.409 [2024-06-11 08:23:45.958601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.409 [2024-06-11 08:23:45.958666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.409 [2024-06-11 08:23:45.958685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:45.958698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:45.958705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:45.958722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 00:31:15.410 [2024-06-11 08:23:45.968639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:45.968706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:45.968724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:45.968732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:45.968740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:45.968756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 
00:31:15.410 [2024-06-11 08:23:45.978651] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:45.978723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:45.978742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:45.978749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:45.978756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:45.978773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 00:31:15.410 [2024-06-11 08:23:45.988584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:45.988655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:45.988673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:45.988680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:45.988686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:45.988702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 00:31:15.410 [2024-06-11 08:23:45.998740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:45.998801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:45.998819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:45.998826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:45.998832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:45.998848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 
00:31:15.410 [2024-06-11 08:23:46.008813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:46.008906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:46.008925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:46.008932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:46.008939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:46.008956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 00:31:15.410 [2024-06-11 08:23:46.018770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:46.018834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:46.018853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:46.018860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:46.018866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:46.018882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 00:31:15.410 [2024-06-11 08:23:46.028691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:46.028770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:46.028789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:46.028796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:46.028802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:46.028819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 
00:31:15.410 [2024-06-11 08:23:46.038725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:46.038788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:46.038807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:46.038814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:46.038820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:46.038835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 00:31:15.410 [2024-06-11 08:23:46.048892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.410 [2024-06-11 08:23:46.048965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.410 [2024-06-11 08:23:46.048983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.410 [2024-06-11 08:23:46.048995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.410 [2024-06-11 08:23:46.049002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.410 [2024-06-11 08:23:46.049017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.410 qpair failed and we were unable to recover it. 00:31:15.673 [2024-06-11 08:23:46.058894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.058961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.058980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.058987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.058993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.059010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 
00:31:15.673 [2024-06-11 08:23:46.068955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.069028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.069046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.069053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.069059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.069076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 00:31:15.673 [2024-06-11 08:23:46.078963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.079024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.079041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.079048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.079054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.079070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 00:31:15.673 [2024-06-11 08:23:46.088856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.088930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.088949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.088956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.088962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.088980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 
00:31:15.673 [2024-06-11 08:23:46.098896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.098986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.099006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.099013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.099019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.099035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 00:31:15.673 [2024-06-11 08:23:46.108950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.109028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.109046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.109054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.109060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.109076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 00:31:15.673 [2024-06-11 08:23:46.119096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.119153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.119171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.119178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.119184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.119201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 
00:31:15.673 [2024-06-11 08:23:46.129111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.129186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.129205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.129211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.129217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.129233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 00:31:15.673 [2024-06-11 08:23:46.139131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.139228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.139252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.673 [2024-06-11 08:23:46.139259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.673 [2024-06-11 08:23:46.139265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.673 [2024-06-11 08:23:46.139283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.673 qpair failed and we were unable to recover it. 00:31:15.673 [2024-06-11 08:23:46.149174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.673 [2024-06-11 08:23:46.149233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.673 [2024-06-11 08:23:46.149251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.149258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.149264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.149280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 
00:31:15.674 [2024-06-11 08:23:46.159097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.159158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.159176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.159183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.159189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.159212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.169264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.169339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.169358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.169366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.169372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.169389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.179237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.179305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.179324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.179331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.179338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.179359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 
00:31:15.674 [2024-06-11 08:23:46.189195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.189268] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.189288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.189295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.189301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.189317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.199197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.199258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.199276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.199283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.199289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.199305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.209350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.209422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.209447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.209455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.209461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.209477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 
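For reference, the "sct 1, sc 130" pair printed by nvme_fabric_qpair_connect_poll() in the lines above comes straight from the completion's status fields. The snippet below is a hedged sketch of a completion callback that surfaces the same fields; the callback name and its wiring are illustrative, not taken from the test code.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Illustrative callback: logs the status type/code of a failed completion,
 * e.g. sct 1 (command specific) and sc 130 (0x82, Connect Invalid Parameters)
 * as seen in the log above. */
void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "command failed: sct %u, sc %u\n",
			(unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
	}
}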
00:31:15.674 [2024-06-11 08:23:46.219376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.219435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.219461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.219468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.219474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.219490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.229385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.229455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.229479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.229486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.229492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.229508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.239334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.239417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.239447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.239455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.239463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.239487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 
00:31:15.674 [2024-06-11 08:23:46.249480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.249550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.249569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.249576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.249582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.249598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.259502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.259556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.259574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.259581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.259587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.259603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.269530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.269584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.269603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.269610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.269616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.269637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 
00:31:15.674 [2024-06-11 08:23:46.279469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.674 [2024-06-11 08:23:46.279551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.674 [2024-06-11 08:23:46.279569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.674 [2024-06-11 08:23:46.279576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.674 [2024-06-11 08:23:46.279582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.674 [2024-06-11 08:23:46.279598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.674 qpair failed and we were unable to recover it. 00:31:15.674 [2024-06-11 08:23:46.289564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.675 [2024-06-11 08:23:46.289660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.675 [2024-06-11 08:23:46.289678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.675 [2024-06-11 08:23:46.289685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.675 [2024-06-11 08:23:46.289691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.675 [2024-06-11 08:23:46.289707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.675 qpair failed and we were unable to recover it. 00:31:15.675 [2024-06-11 08:23:46.299601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.675 [2024-06-11 08:23:46.299681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.675 [2024-06-11 08:23:46.299701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.675 [2024-06-11 08:23:46.299709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.675 [2024-06-11 08:23:46.299718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.675 [2024-06-11 08:23:46.299735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.675 qpair failed and we were unable to recover it. 
00:31:15.675 [2024-06-11 08:23:46.309653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.675 [2024-06-11 08:23:46.309715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.675 [2024-06-11 08:23:46.309734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.675 [2024-06-11 08:23:46.309741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.675 [2024-06-11 08:23:46.309747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.675 [2024-06-11 08:23:46.309763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.675 qpair failed and we were unable to recover it. 00:31:15.938 [2024-06-11 08:23:46.319701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.938 [2024-06-11 08:23:46.319764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.938 [2024-06-11 08:23:46.319788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.938 [2024-06-11 08:23:46.319795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.938 [2024-06-11 08:23:46.319801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.938 [2024-06-11 08:23:46.319817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.938 qpair failed and we were unable to recover it. 00:31:15.938 [2024-06-11 08:23:46.329767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.938 [2024-06-11 08:23:46.329839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.938 [2024-06-11 08:23:46.329857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.938 [2024-06-11 08:23:46.329865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.938 [2024-06-11 08:23:46.329870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.938 [2024-06-11 08:23:46.329886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.938 qpair failed and we were unable to recover it. 
00:31:15.938 [2024-06-11 08:23:46.339759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.938 [2024-06-11 08:23:46.339830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.938 [2024-06-11 08:23:46.339848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.938 [2024-06-11 08:23:46.339855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.938 [2024-06-11 08:23:46.339861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.938 [2024-06-11 08:23:46.339876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.938 qpair failed and we were unable to recover it. 00:31:15.938 [2024-06-11 08:23:46.349753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.938 [2024-06-11 08:23:46.349819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.938 [2024-06-11 08:23:46.349837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.938 [2024-06-11 08:23:46.349844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.938 [2024-06-11 08:23:46.349850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.938 [2024-06-11 08:23:46.349865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.938 qpair failed and we were unable to recover it. 00:31:15.938 [2024-06-11 08:23:46.359864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.938 [2024-06-11 08:23:46.359930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.938 [2024-06-11 08:23:46.359948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.938 [2024-06-11 08:23:46.359955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.938 [2024-06-11 08:23:46.359967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.938 [2024-06-11 08:23:46.359982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.938 qpair failed and we were unable to recover it. 
00:31:15.938 [2024-06-11 08:23:46.369839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.938 [2024-06-11 08:23:46.369911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.938 [2024-06-11 08:23:46.369929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.938 [2024-06-11 08:23:46.369936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.938 [2024-06-11 08:23:46.369942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.938 [2024-06-11 08:23:46.369957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.938 qpair failed and we were unable to recover it. 00:31:15.938 [2024-06-11 08:23:46.379873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.938 [2024-06-11 08:23:46.379935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.379953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.379960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.379966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.379982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.389901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.389957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.389975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.389982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.389988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.390004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 
00:31:15.939 [2024-06-11 08:23:46.399844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.399919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.399940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.399947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.399954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.399970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.409977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.410061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.410081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.410088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.410094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.410110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.419976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.420040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.420058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.420065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.420071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.420087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 
00:31:15.939 [2024-06-11 08:23:46.430006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.430060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.430079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.430086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.430092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.430108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.439954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.440013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.440031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.440038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.440044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.440060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.450089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.450165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.450198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.450213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.450220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.450242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 
00:31:15.939 [2024-06-11 08:23:46.460110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.460182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.460215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.460224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.460230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.460252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.470173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.470250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.470283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.470292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.470298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.470319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.480083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.480145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.480167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.480174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.480180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.480197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 
00:31:15.939 [2024-06-11 08:23:46.490209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.490322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.490341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.490349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.490355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.490371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.500237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.500295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.500314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.500321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.500327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.500343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.939 qpair failed and we were unable to recover it. 00:31:15.939 [2024-06-11 08:23:46.510289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.939 [2024-06-11 08:23:46.510346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.939 [2024-06-11 08:23:46.510365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.939 [2024-06-11 08:23:46.510372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.939 [2024-06-11 08:23:46.510378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.939 [2024-06-11 08:23:46.510394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 
00:31:15.940 [2024-06-11 08:23:46.520231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.940 [2024-06-11 08:23:46.520292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.940 [2024-06-11 08:23:46.520310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.940 [2024-06-11 08:23:46.520317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.940 [2024-06-11 08:23:46.520323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.940 [2024-06-11 08:23:46.520339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 00:31:15.940 [2024-06-11 08:23:46.530220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.940 [2024-06-11 08:23:46.530307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.940 [2024-06-11 08:23:46.530326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.940 [2024-06-11 08:23:46.530333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.940 [2024-06-11 08:23:46.530339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.940 [2024-06-11 08:23:46.530355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 00:31:15.940 [2024-06-11 08:23:46.540387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.940 [2024-06-11 08:23:46.540454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.940 [2024-06-11 08:23:46.540473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.940 [2024-06-11 08:23:46.540492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.940 [2024-06-11 08:23:46.540498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.940 [2024-06-11 08:23:46.540514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 
00:31:15.940 [2024-06-11 08:23:46.550400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.940 [2024-06-11 08:23:46.550465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.940 [2024-06-11 08:23:46.550483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.940 [2024-06-11 08:23:46.550490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.940 [2024-06-11 08:23:46.550496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.940 [2024-06-11 08:23:46.550512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 00:31:15.940 [2024-06-11 08:23:46.560486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.940 [2024-06-11 08:23:46.560580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.940 [2024-06-11 08:23:46.560598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.940 [2024-06-11 08:23:46.560605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.940 [2024-06-11 08:23:46.560611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.940 [2024-06-11 08:23:46.560627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 00:31:15.940 [2024-06-11 08:23:46.570424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.940 [2024-06-11 08:23:46.570493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.940 [2024-06-11 08:23:46.570511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.940 [2024-06-11 08:23:46.570518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.940 [2024-06-11 08:23:46.570525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.940 [2024-06-11 08:23:46.570540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 
00:31:15.940 [2024-06-11 08:23:46.580474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:15.940 [2024-06-11 08:23:46.580559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:15.940 [2024-06-11 08:23:46.580577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:15.940 [2024-06-11 08:23:46.580584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:15.940 [2024-06-11 08:23:46.580590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:15.940 [2024-06-11 08:23:46.580605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:15.940 qpair failed and we were unable to recover it. 00:31:16.203 [2024-06-11 08:23:46.590545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.590655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.590673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.590680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.590687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.590702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 00:31:16.203 [2024-06-11 08:23:46.600551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.600608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.600626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.600633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.600639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.600654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 
00:31:16.203 [2024-06-11 08:23:46.610575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.610657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.610674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.610681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.610688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.610703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 00:31:16.203 [2024-06-11 08:23:46.620598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.620663] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.620681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.620688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.620694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.620709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 00:31:16.203 [2024-06-11 08:23:46.630636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.630707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.630730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.630738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.630743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.630759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 
00:31:16.203 [2024-06-11 08:23:46.640703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.640762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.640781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.640788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.640794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.640810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 00:31:16.203 [2024-06-11 08:23:46.650691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.650762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.650782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.650789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.650795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.650811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 00:31:16.203 [2024-06-11 08:23:46.660703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.660801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.660819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.660826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.660832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.660848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 
00:31:16.203 [2024-06-11 08:23:46.670638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.670702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.203 [2024-06-11 08:23:46.670720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.203 [2024-06-11 08:23:46.670726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.203 [2024-06-11 08:23:46.670732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.203 [2024-06-11 08:23:46.670754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.203 qpair failed and we were unable to recover it. 00:31:16.203 [2024-06-11 08:23:46.680867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.203 [2024-06-11 08:23:46.680966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.680984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.680990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.680997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.681012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.690815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.690892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.690910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.690918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.690924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.690939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 
00:31:16.204 [2024-06-11 08:23:46.700829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.700897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.700915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.700922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.700929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.700944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.710759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.710821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.710839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.710846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.710852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.710868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.720945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.721006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.721030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.721037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.721043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.721058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 
00:31:16.204 [2024-06-11 08:23:46.730953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.731024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.731042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.731049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.731056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.731072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.740913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.740987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.741005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.741012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.741018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.741033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.751034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.751095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.751112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.751119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.751126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.751142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 
00:31:16.204 [2024-06-11 08:23:46.761059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.761115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.761133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.761141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.761147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.761168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.771085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.771154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.771172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.771179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.771185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.771201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.781094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.781184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.781202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.781210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.781215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.781232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 
00:31:16.204 [2024-06-11 08:23:46.791120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.791177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.791196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.791203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.791208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.791225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.801183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.801256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.801275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.801282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.801288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.204 [2024-06-11 08:23:46.801304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.204 qpair failed and we were unable to recover it. 00:31:16.204 [2024-06-11 08:23:46.811198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.204 [2024-06-11 08:23:46.811262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.204 [2024-06-11 08:23:46.811288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.204 [2024-06-11 08:23:46.811296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.204 [2024-06-11 08:23:46.811305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.205 [2024-06-11 08:23:46.811322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.205 qpair failed and we were unable to recover it. 
00:31:16.205 [2024-06-11 08:23:46.821235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.205 [2024-06-11 08:23:46.821298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.205 [2024-06-11 08:23:46.821318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.205 [2024-06-11 08:23:46.821325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.205 [2024-06-11 08:23:46.821331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.205 [2024-06-11 08:23:46.821347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.205 qpair failed and we were unable to recover it. 00:31:16.205 [2024-06-11 08:23:46.831256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.205 [2024-06-11 08:23:46.831313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.205 [2024-06-11 08:23:46.831332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.205 [2024-06-11 08:23:46.831340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.205 [2024-06-11 08:23:46.831346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.205 [2024-06-11 08:23:46.831362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.205 qpair failed and we were unable to recover it. 00:31:16.205 [2024-06-11 08:23:46.841304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.205 [2024-06-11 08:23:46.841381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.205 [2024-06-11 08:23:46.841399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.205 [2024-06-11 08:23:46.841406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.205 [2024-06-11 08:23:46.841412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.205 [2024-06-11 08:23:46.841428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.205 qpair failed and we were unable to recover it. 
00:31:16.467 [2024-06-11 08:23:46.851285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.467 [2024-06-11 08:23:46.851348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.467 [2024-06-11 08:23:46.851366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.467 [2024-06-11 08:23:46.851374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.467 [2024-06-11 08:23:46.851385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.467 [2024-06-11 08:23:46.851401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.467 qpair failed and we were unable to recover it. 00:31:16.467 [2024-06-11 08:23:46.861318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.467 [2024-06-11 08:23:46.861386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.467 [2024-06-11 08:23:46.861404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.467 [2024-06-11 08:23:46.861411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.467 [2024-06-11 08:23:46.861417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.467 [2024-06-11 08:23:46.861432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.467 qpair failed and we were unable to recover it. 00:31:16.467 [2024-06-11 08:23:46.871387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.467 [2024-06-11 08:23:46.871463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.467 [2024-06-11 08:23:46.871481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.467 [2024-06-11 08:23:46.871487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.467 [2024-06-11 08:23:46.871494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.467 [2024-06-11 08:23:46.871509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.467 qpair failed and we were unable to recover it. 
00:31:16.467 [2024-06-11 08:23:46.881292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.467 [2024-06-11 08:23:46.881356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.467 [2024-06-11 08:23:46.881374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.467 [2024-06-11 08:23:46.881381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.467 [2024-06-11 08:23:46.881387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.467 [2024-06-11 08:23:46.881403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.467 qpair failed and we were unable to recover it. 00:31:16.467 [2024-06-11 08:23:46.891500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.891577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.891596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.891603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.891610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.891625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:46.901480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.901543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.901562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.901569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.901575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.901591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 
00:31:16.468 [2024-06-11 08:23:46.911490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.911554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.911572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.911579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.911586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.911601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:46.921605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.921693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.921711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.921718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.921724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.921741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:46.931458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.931530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.931549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.931556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.931562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.931577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 
00:31:16.468 [2024-06-11 08:23:46.941549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.941599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.941617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.941624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.941635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.941650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:46.951633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.951691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.951709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.951716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.951722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.951737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:46.961659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.961713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.961731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.961738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.961744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.961759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 
00:31:16.468 [2024-06-11 08:23:46.971673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.971743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.971762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.971769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.971777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.971798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:46.981669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.981720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.981741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.981748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.981754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.981770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:46.991754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:46.991815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:46.991831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:46.991838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:46.991844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:46.991859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 
00:31:16.468 [2024-06-11 08:23:47.001643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.468 [2024-06-11 08:23:47.001717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.468 [2024-06-11 08:23:47.001733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.468 [2024-06-11 08:23:47.001740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.468 [2024-06-11 08:23:47.001745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.468 [2024-06-11 08:23:47.001759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.468 qpair failed and we were unable to recover it. 00:31:16.468 [2024-06-11 08:23:47.011758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.011815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.011830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.011837] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.011843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.011857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 00:31:16.469 [2024-06-11 08:23:47.021838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.021888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.021902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.021909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.021915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.021929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 
00:31:16.469 [2024-06-11 08:23:47.031760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.031824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.031840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.031853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.031860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.031875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 00:31:16.469 [2024-06-11 08:23:47.041737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.041800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.041815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.041823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.041829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.041843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 00:31:16.469 [2024-06-11 08:23:47.051828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.051928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.051943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.051950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.051957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.051972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 
00:31:16.469 [2024-06-11 08:23:47.061889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.061934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.061949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.061956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.061962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.061977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 00:31:16.469 [2024-06-11 08:23:47.071942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.072005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.072020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.072027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.072033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.072046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 00:31:16.469 [2024-06-11 08:23:47.081934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.081978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.081992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.081998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.082004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.082018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 
00:31:16.469 [2024-06-11 08:23:47.091952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.092017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.092031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.092037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.092043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.092057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 00:31:16.469 [2024-06-11 08:23:47.101965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.102013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.102027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.102034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.102039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.469 [2024-06-11 08:23:47.102053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.469 qpair failed and we were unable to recover it. 00:31:16.469 [2024-06-11 08:23:47.111925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.469 [2024-06-11 08:23:47.111983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.469 [2024-06-11 08:23:47.111998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.469 [2024-06-11 08:23:47.112005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.469 [2024-06-11 08:23:47.112011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.112030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 
00:31:16.732 [2024-06-11 08:23:47.122046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.122092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.122106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.122116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.122122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.122136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 00:31:16.732 [2024-06-11 08:23:47.132085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.132148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.132163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.132170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.132176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.132189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 00:31:16.732 [2024-06-11 08:23:47.142095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.142142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.142156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.142163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.142169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.142182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 
00:31:16.732 [2024-06-11 08:23:47.152023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.152072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.152085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.152092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.152098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.152111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 00:31:16.732 [2024-06-11 08:23:47.162184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.162257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.162270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.162277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.162283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.162296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 00:31:16.732 [2024-06-11 08:23:47.172149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.172199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.172213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.172220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.172226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.172239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 
00:31:16.732 [2024-06-11 08:23:47.182186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.182231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.182245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.182252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.182258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.182271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 00:31:16.732 [2024-06-11 08:23:47.192274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.192322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.192335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.192342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.192348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.192361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 00:31:16.732 [2024-06-11 08:23:47.202227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.202269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.732 [2024-06-11 08:23:47.202283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.732 [2024-06-11 08:23:47.202289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.732 [2024-06-11 08:23:47.202295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.732 [2024-06-11 08:23:47.202308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.732 qpair failed and we were unable to recover it. 
00:31:16.732 [2024-06-11 08:23:47.212277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.732 [2024-06-11 08:23:47.212324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.212340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.212347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.212353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.212366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.222309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.222366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.222380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.222386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.222392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.222407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.232382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.232430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.232447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.232454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.232460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.232474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 
00:31:16.733 [2024-06-11 08:23:47.242325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.242372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.242385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.242392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.242398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.242411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.252395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.252448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.252463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.252469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.252476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.252496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.262383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.262429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.262447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.262454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.262460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.262474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 
00:31:16.733 [2024-06-11 08:23:47.272460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.272521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.272534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.272541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.272546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.272560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.282478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.282534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.282547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.282554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.282560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.282574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.292376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.292494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.292507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.292514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.292520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.292534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 
00:31:16.733 [2024-06-11 08:23:47.302524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.302570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.302587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.302593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.302599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.302613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.312566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.312613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.312627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.312633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.312639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.312653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.322578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.322623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.322636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.322643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.322649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.322662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 
00:31:16.733 [2024-06-11 08:23:47.332588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.332684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.332697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.332704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.332710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.733 [2024-06-11 08:23:47.332723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.733 qpair failed and we were unable to recover it. 00:31:16.733 [2024-06-11 08:23:47.342650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.733 [2024-06-11 08:23:47.342692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.733 [2024-06-11 08:23:47.342705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.733 [2024-06-11 08:23:47.342712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.733 [2024-06-11 08:23:47.342721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.734 [2024-06-11 08:23:47.342735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.734 qpair failed and we were unable to recover it. 00:31:16.734 [2024-06-11 08:23:47.352691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.734 [2024-06-11 08:23:47.352737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.734 [2024-06-11 08:23:47.352750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.734 [2024-06-11 08:23:47.352757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.734 [2024-06-11 08:23:47.352763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.734 [2024-06-11 08:23:47.352776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.734 qpair failed and we were unable to recover it. 
00:31:16.734 [2024-06-11 08:23:47.362715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.734 [2024-06-11 08:23:47.362760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.734 [2024-06-11 08:23:47.362773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.734 [2024-06-11 08:23:47.362780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.734 [2024-06-11 08:23:47.362785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.734 [2024-06-11 08:23:47.362799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.734 qpair failed and we were unable to recover it. 00:31:16.734 [2024-06-11 08:23:47.372725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.734 [2024-06-11 08:23:47.372781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.734 [2024-06-11 08:23:47.372794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.734 [2024-06-11 08:23:47.372800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.734 [2024-06-11 08:23:47.372806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.734 [2024-06-11 08:23:47.372819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.734 qpair failed and we were unable to recover it. 00:31:16.996 [2024-06-11 08:23:47.382754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.996 [2024-06-11 08:23:47.382798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.996 [2024-06-11 08:23:47.382811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.996 [2024-06-11 08:23:47.382818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.996 [2024-06-11 08:23:47.382824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.996 [2024-06-11 08:23:47.382837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.996 qpair failed and we were unable to recover it. 
00:31:16.996 [2024-06-11 08:23:47.392813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.996 [2024-06-11 08:23:47.392866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.996 [2024-06-11 08:23:47.392880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.996 [2024-06-11 08:23:47.392887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.996 [2024-06-11 08:23:47.392893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.996 [2024-06-11 08:23:47.392906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.996 qpair failed and we were unable to recover it. 00:31:16.996 [2024-06-11 08:23:47.402676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.402725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.402738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.402745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.402750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.402764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.412835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.412880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.412893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.412900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.412906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.412919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 
00:31:16.997 [2024-06-11 08:23:47.422847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.422889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.422903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.422909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.422915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.422928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.432906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.432958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.432972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.432978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.432987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.433000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.442905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.442950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.442963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.442969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.442975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.442989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 
00:31:16.997 [2024-06-11 08:23:47.452914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.452967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.452980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.452987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.452993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.453006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.462841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.462889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.462902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.462909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.462915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.462928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.473045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.473101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.473116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.473123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.473131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.473146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 
00:31:16.997 [2024-06-11 08:23:47.482992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.483045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.483059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.483066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.483072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.483085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.493070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.493195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.493208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.493215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.493221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.493234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.503051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.503092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.503106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.503113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.503118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.503132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 
00:31:16.997 [2024-06-11 08:23:47.513143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.513200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.513214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.513220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.513226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.513240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.523146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.523196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.523210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.523220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.523226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.997 [2024-06-11 08:23:47.523239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.997 qpair failed and we were unable to recover it. 00:31:16.997 [2024-06-11 08:23:47.533170] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.997 [2024-06-11 08:23:47.533218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.997 [2024-06-11 08:23:47.533231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.997 [2024-06-11 08:23:47.533238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.997 [2024-06-11 08:23:47.533244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.533257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 
00:31:16.998 [2024-06-11 08:23:47.543188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.543247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.543261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.543268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.543275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.543289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 00:31:16.998 [2024-06-11 08:23:47.553261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.553339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.553353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.553360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.553365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.553379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 00:31:16.998 [2024-06-11 08:23:47.563256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.563297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.563311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.563317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.563323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.563337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 
00:31:16.998 [2024-06-11 08:23:47.573297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.573349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.573363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.573369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.573375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.573389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 00:31:16.998 [2024-06-11 08:23:47.583318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.583365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.583379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.583385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.583391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.583404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 00:31:16.998 [2024-06-11 08:23:47.593271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.593321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.593335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.593342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.593348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.593362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 
00:31:16.998 [2024-06-11 08:23:47.603349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.603397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.603411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.603417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.603423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.603443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 00:31:16.998 [2024-06-11 08:23:47.613395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.613451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.613465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.613475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.613481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.613495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 00:31:16.998 [2024-06-11 08:23:47.623402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.623449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.623463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.623469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.623476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.623489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 
00:31:16.998 [2024-06-11 08:23:47.633503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:16.998 [2024-06-11 08:23:47.633558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:16.998 [2024-06-11 08:23:47.633571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:16.998 [2024-06-11 08:23:47.633578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:16.998 [2024-06-11 08:23:47.633584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:16.998 [2024-06-11 08:23:47.633598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:16.998 qpair failed and we were unable to recover it. 00:31:17.261 [2024-06-11 08:23:47.643499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.643548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.643561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.643567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.643573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.643587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 00:31:17.261 [2024-06-11 08:23:47.653497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.653587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.653600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.653606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.653612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.653626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 
00:31:17.261 [2024-06-11 08:23:47.663552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.663603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.663617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.663623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.663629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.663643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 00:31:17.261 [2024-06-11 08:23:47.673489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.673535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.673548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.673555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.673561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.673574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 00:31:17.261 [2024-06-11 08:23:47.683619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.683676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.683689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.683696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.683702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.683715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 
00:31:17.261 [2024-06-11 08:23:47.693621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.693691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.693704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.693711] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.693717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.693730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 00:31:17.261 [2024-06-11 08:23:47.703648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.703696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.703712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.703719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.703725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.703739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 00:31:17.261 [2024-06-11 08:23:47.713729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.713779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.261 [2024-06-11 08:23:47.713792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.261 [2024-06-11 08:23:47.713799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.261 [2024-06-11 08:23:47.713805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.261 [2024-06-11 08:23:47.713818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.261 qpair failed and we were unable to recover it. 
00:31:17.261 [2024-06-11 08:23:47.723591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.261 [2024-06-11 08:23:47.723637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.723650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.723657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.723663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.723676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.733614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.733668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.733682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.733688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.733694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.733707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.743755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.743798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.743811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.743818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.743824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.743841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 
00:31:17.262 [2024-06-11 08:23:47.753847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.753895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.753908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.753915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.753920] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.753934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.763747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.763791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.763806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.763812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.763818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.763832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.773831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.773924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.773938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.773944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.773951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.773964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 
00:31:17.262 [2024-06-11 08:23:47.783868] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.783915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.783929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.783935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.783941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.783955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.793936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.793989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.794006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.794013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.794019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.794032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.803939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.803985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.803999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.804006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.804011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.804025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 
00:31:17.262 [2024-06-11 08:23:47.813965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.814059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.814073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.814079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.814085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.814099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.823977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.824020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.824034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.824040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.824046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.824060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.834038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.834088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.834101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.834108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.834114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.834131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 
00:31:17.262 [2024-06-11 08:23:47.844027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.844072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.844086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.262 [2024-06-11 08:23:47.844092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.262 [2024-06-11 08:23:47.844098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.262 [2024-06-11 08:23:47.844112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.262 qpair failed and we were unable to recover it. 00:31:17.262 [2024-06-11 08:23:47.853946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.262 [2024-06-11 08:23:47.854007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.262 [2024-06-11 08:23:47.854020] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-06-11 08:23:47.854027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-06-11 08:23:47.854032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.263 [2024-06-11 08:23:47.854046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-06-11 08:23:47.864067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-06-11 08:23:47.864112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-06-11 08:23:47.864125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-06-11 08:23:47.864132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-06-11 08:23:47.864138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.263 [2024-06-11 08:23:47.864151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.263 qpair failed and we were unable to recover it. 
00:31:17.263 [2024-06-11 08:23:47.874132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-06-11 08:23:47.874179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-06-11 08:23:47.874192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-06-11 08:23:47.874199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-06-11 08:23:47.874205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.263 [2024-06-11 08:23:47.874218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-06-11 08:23:47.884165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-06-11 08:23:47.884213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-06-11 08:23:47.884230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-06-11 08:23:47.884236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-06-11 08:23:47.884242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.263 [2024-06-11 08:23:47.884255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.263 [2024-06-11 08:23:47.894169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-06-11 08:23:47.894228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-06-11 08:23:47.894242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-06-11 08:23:47.894249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-06-11 08:23:47.894255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.263 [2024-06-11 08:23:47.894268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.263 qpair failed and we were unable to recover it. 
00:31:17.263 [2024-06-11 08:23:47.904201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.263 [2024-06-11 08:23:47.904249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.263 [2024-06-11 08:23:47.904263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.263 [2024-06-11 08:23:47.904269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.263 [2024-06-11 08:23:47.904275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.263 [2024-06-11 08:23:47.904289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.263 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:47.914285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.914377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.914391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.914398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.914404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.914417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:47.924257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.924351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.924365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.924372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.924385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.924398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.525 [2024-06-11 08:23:47.934157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.934201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.934215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.934222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.934228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.934246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:47.944176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.944216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.944229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.944236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.944242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.944255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:47.954334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.954390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.954403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.954410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.954416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.954429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.525 [2024-06-11 08:23:47.964340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.964383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.964397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.964403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.964409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.964422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:47.974258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.974308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.974322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.974328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.974334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.974347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:47.984435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.984483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.984497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.984504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.984510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.984524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.525 [2024-06-11 08:23:47.994489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:47.994544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:47.994558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:47.994565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:47.994571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:47.994584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:48.004471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.004517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.004530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.004537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.004543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.004556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:48.014382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.014434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.014452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.014463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.014469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.014483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.525 [2024-06-11 08:23:48.024535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.024581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.024594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.024601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.024607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.024620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:48.034569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.034617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.034632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.034639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.034645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.034661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:48.044609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.044654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.044669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.044675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.044681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.044695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.525 [2024-06-11 08:23:48.054627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.054672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.054686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.054692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.054698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.054711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:48.064633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.064678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.064691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.064698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.064704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.064718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 00:31:17.525 [2024-06-11 08:23:48.074706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.074755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.074768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.525 [2024-06-11 08:23:48.074775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.525 [2024-06-11 08:23:48.074781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.525 [2024-06-11 08:23:48.074794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.525 qpair failed and we were unable to recover it. 
00:31:17.525 [2024-06-11 08:23:48.084699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.525 [2024-06-11 08:23:48.084744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.525 [2024-06-11 08:23:48.084758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.084764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.084770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.084784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 00:31:17.526 [2024-06-11 08:23:48.094727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.094773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.094786] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.094792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.094798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.094812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 00:31:17.526 [2024-06-11 08:23:48.104770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.104828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.104842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.104852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.104858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.104871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 
00:31:17.526 [2024-06-11 08:23:48.114814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.114866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.114879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.114886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.114892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.114906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 00:31:17.526 [2024-06-11 08:23:48.124856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.124935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.124948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.124954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.124961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.124974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 00:31:17.526 [2024-06-11 08:23:48.134826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.134926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.134939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.134946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.134952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.134965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 
00:31:17.526 [2024-06-11 08:23:48.144862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.144911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.144924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.144931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.144936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.144950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 00:31:17.526 [2024-06-11 08:23:48.154898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.154949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.154962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.154968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.154975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.154988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 00:31:17.526 [2024-06-11 08:23:48.164917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.526 [2024-06-11 08:23:48.164963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.526 [2024-06-11 08:23:48.164976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.526 [2024-06-11 08:23:48.164983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.526 [2024-06-11 08:23:48.164989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.526 [2024-06-11 08:23:48.165002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.526 qpair failed and we were unable to recover it. 
00:31:17.788 [2024-06-11 08:23:48.174924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.174973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.174986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.174993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.174999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.175012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.184913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.184959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.184972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.184979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.184985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.184998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.195029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.195076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.195092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.195099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.195104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.195118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 
00:31:17.788 [2024-06-11 08:23:48.205016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.205061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.205074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.205081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.205087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.205100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.215032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.215088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.215102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.215108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.215114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.215129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.225065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.225112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.225126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.225132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.225138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.225152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 
00:31:17.788 [2024-06-11 08:23:48.235133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.235186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.235200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.235207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.235213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.235230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.245131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.245175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.245188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.245195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.245201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.245214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.255144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.255190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.255203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.255210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.255216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.255229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 
00:31:17.788 [2024-06-11 08:23:48.265171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.265214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.265227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.265234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.265240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.265253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.275283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.275358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.275371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.275378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.275384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.275397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 00:31:17.788 [2024-06-11 08:23:48.285232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.788 [2024-06-11 08:23:48.285285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.788 [2024-06-11 08:23:48.285302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.788 [2024-06-11 08:23:48.285309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.788 [2024-06-11 08:23:48.285315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.788 [2024-06-11 08:23:48.285328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.788 qpair failed and we were unable to recover it. 
00:31:17.788 [2024-06-11 08:23:48.295270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.295317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.295330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.295337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.295342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.295355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.305280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.305320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.305334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.305340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.305346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.305359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.315352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.315400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.315413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.315420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.315426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.315443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 
00:31:17.789 [2024-06-11 08:23:48.325209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.325255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.325268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.325274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.325280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.325297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.335372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.335423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.335436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.335447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.335453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.335466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.345375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.345415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.345428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.345435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.345445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.345459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 
00:31:17.789 [2024-06-11 08:23:48.355347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.355444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.355457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.355464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.355470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.355484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.365462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.365507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.365521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.365527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.365533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.365546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.375484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.375532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.375549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.375555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.375561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.375575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 
00:31:17.789 [2024-06-11 08:23:48.385398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.385444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.385458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.385464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.385470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.385483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.395444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.395486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.395500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.395507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.395513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.395526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.405544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.405589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.405603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.405609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.405615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.405628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 
00:31:17.789 [2024-06-11 08:23:48.415604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.415648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.415661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.415668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.789 [2024-06-11 08:23:48.415677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.789 [2024-06-11 08:23:48.415691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.789 qpair failed and we were unable to recover it. 00:31:17.789 [2024-06-11 08:23:48.425629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:17.789 [2024-06-11 08:23:48.425679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:17.789 [2024-06-11 08:23:48.425692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:17.789 [2024-06-11 08:23:48.425699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:17.790 [2024-06-11 08:23:48.425704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:17.790 [2024-06-11 08:23:48.425718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:17.790 qpair failed and we were unable to recover it. 00:31:18.051 [2024-06-11 08:23:48.435664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.051 [2024-06-11 08:23:48.435707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.051 [2024-06-11 08:23:48.435721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.051 [2024-06-11 08:23:48.435727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.051 [2024-06-11 08:23:48.435733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.051 [2024-06-11 08:23:48.435747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.051 qpair failed and we were unable to recover it. 
00:31:18.051 [2024-06-11 08:23:48.445694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.051 [2024-06-11 08:23:48.445737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.051 [2024-06-11 08:23:48.445750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.051 [2024-06-11 08:23:48.445757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.051 [2024-06-11 08:23:48.445763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.051 [2024-06-11 08:23:48.445776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.051 qpair failed and we were unable to recover it. 00:31:18.051 [2024-06-11 08:23:48.455714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.051 [2024-06-11 08:23:48.455762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.051 [2024-06-11 08:23:48.455775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.051 [2024-06-11 08:23:48.455782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.051 [2024-06-11 08:23:48.455788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.051 [2024-06-11 08:23:48.455801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.051 qpair failed and we were unable to recover it. 00:31:18.051 [2024-06-11 08:23:48.465765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.051 [2024-06-11 08:23:48.465835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.051 [2024-06-11 08:23:48.465849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.051 [2024-06-11 08:23:48.465856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.051 [2024-06-11 08:23:48.465862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.051 [2024-06-11 08:23:48.465875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.051 qpair failed and we were unable to recover it. 
00:31:18.051 [2024-06-11 08:23:48.475746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.051 [2024-06-11 08:23:48.475790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.475803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.475809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.475815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.475829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.485797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.485865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.485878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.485885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.485891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.485904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.495841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.495909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.495923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.495929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.495935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.495948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 
00:31:18.052 [2024-06-11 08:23:48.505839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.505886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.505899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.505906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.505915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.505928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.515743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.515794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.515808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.515814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.515821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.515834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.525908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.525952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.525966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.525974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.525980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.525993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 
00:31:18.052 [2024-06-11 08:23:48.535896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.535941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.535954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.535961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.535967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.535980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.545941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.545994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.546007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.546013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.546019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.546032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.555981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.556022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.556035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.556042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.556048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.556061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 
00:31:18.052 [2024-06-11 08:23:48.566006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.566047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.566060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.566067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.566073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.566086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.575906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.575958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.575972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.575978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.575984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.575997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.586034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.586082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.586096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.586102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.586108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.586121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 
00:31:18.052 [2024-06-11 08:23:48.596084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.596127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.596140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.596150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.052 [2024-06-11 08:23:48.596157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.052 [2024-06-11 08:23:48.596170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.052 qpair failed and we were unable to recover it. 00:31:18.052 [2024-06-11 08:23:48.606030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.052 [2024-06-11 08:23:48.606087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.052 [2024-06-11 08:23:48.606100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.052 [2024-06-11 08:23:48.606107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.606113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.606126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 00:31:18.053 [2024-06-11 08:23:48.616136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.616216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.616230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.616236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.616242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.616255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 
00:31:18.053 [2024-06-11 08:23:48.626178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.626246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.626259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.626266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.626272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.626285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 00:31:18.053 [2024-06-11 08:23:48.636245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.636327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.636341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.636347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.636353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.636366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 00:31:18.053 [2024-06-11 08:23:48.646256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.646335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.646348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.646355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.646360] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.646374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 
00:31:18.053 [2024-06-11 08:23:48.656268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.656313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.656327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.656333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.656339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.656352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 00:31:18.053 [2024-06-11 08:23:48.666158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.666205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.666219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.666225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.666231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.666244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 00:31:18.053 [2024-06-11 08:23:48.676313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.676366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.676380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.676386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.676392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.676405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 
00:31:18.053 [2024-06-11 08:23:48.686337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.053 [2024-06-11 08:23:48.686380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.053 [2024-06-11 08:23:48.686394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.053 [2024-06-11 08:23:48.686407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.053 [2024-06-11 08:23:48.686413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.053 [2024-06-11 08:23:48.686426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.053 qpair failed and we were unable to recover it. 00:31:18.315 [2024-06-11 08:23:48.696378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.315 [2024-06-11 08:23:48.696429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.315 [2024-06-11 08:23:48.696447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.315 [2024-06-11 08:23:48.696454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.315 [2024-06-11 08:23:48.696460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.315 [2024-06-11 08:23:48.696473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.315 qpair failed and we were unable to recover it. 00:31:18.315 [2024-06-11 08:23:48.706377] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.315 [2024-06-11 08:23:48.706422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.315 [2024-06-11 08:23:48.706435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.315 [2024-06-11 08:23:48.706445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.315 [2024-06-11 08:23:48.706451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.315 [2024-06-11 08:23:48.706465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.315 qpair failed and we were unable to recover it. 
00:31:18.315 [2024-06-11 08:23:48.716434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.315 [2024-06-11 08:23:48.716477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.315 [2024-06-11 08:23:48.716491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.315 [2024-06-11 08:23:48.716497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.315 [2024-06-11 08:23:48.716503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.315 [2024-06-11 08:23:48.716517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.315 qpair failed and we were unable to recover it. 00:31:18.315 [2024-06-11 08:23:48.726321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.315 [2024-06-11 08:23:48.726377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.726390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.726397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.726403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.726416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.736476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.736523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.736537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.736543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.736549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.736563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 
00:31:18.316 [2024-06-11 08:23:48.746473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.746516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.746530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.746536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.746542] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.746556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.756538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.756591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.756605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.756611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.756617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.756631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.766434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.766479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.766492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.766499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.766505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.766518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 
00:31:18.316 [2024-06-11 08:23:48.776591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.776639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.776656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.776662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.776668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.776681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.786608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.786646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.786660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.786667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.786673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.786686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.796640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.796685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.796698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.796705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.796710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.796724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 
00:31:18.316 [2024-06-11 08:23:48.806653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.806699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.806712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.806719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.806725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.806738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.816691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.816744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.816757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.816764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.816770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.816786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.826591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.826636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.826650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.826656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.826662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.826675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 
00:31:18.316 [2024-06-11 08:23:48.836758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.836802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.836815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.836821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.836827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.836840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.846803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.846848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.846861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.846867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.846873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.846886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.856782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.856833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.856846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.856853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.856858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.856872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 
00:31:18.316 [2024-06-11 08:23:48.866838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.866885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.866901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.866908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.866914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.866927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.876842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.876891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.876905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.876911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.876917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.876930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.886873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.886919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.886932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.886938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.886944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.886957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 
00:31:18.316 [2024-06-11 08:23:48.896915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.896965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.316 [2024-06-11 08:23:48.896978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.316 [2024-06-11 08:23:48.896985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.316 [2024-06-11 08:23:48.896991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.316 [2024-06-11 08:23:48.897004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.316 qpair failed and we were unable to recover it. 00:31:18.316 [2024-06-11 08:23:48.906935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.316 [2024-06-11 08:23:48.906976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.317 [2024-06-11 08:23:48.906989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.317 [2024-06-11 08:23:48.906996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.317 [2024-06-11 08:23:48.907005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.317 [2024-06-11 08:23:48.907019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.317 qpair failed and we were unable to recover it. 00:31:18.317 [2024-06-11 08:23:48.916853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.317 [2024-06-11 08:23:48.916906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.317 [2024-06-11 08:23:48.916919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.317 [2024-06-11 08:23:48.916926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.317 [2024-06-11 08:23:48.916932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.317 [2024-06-11 08:23:48.916945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.317 qpair failed and we were unable to recover it. 
00:31:18.317 [2024-06-11 08:23:48.926997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.317 [2024-06-11 08:23:48.927044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.317 [2024-06-11 08:23:48.927057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.317 [2024-06-11 08:23:48.927063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.317 [2024-06-11 08:23:48.927069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.317 [2024-06-11 08:23:48.927082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.317 qpair failed and we were unable to recover it. 00:31:18.317 [2024-06-11 08:23:48.937031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.317 [2024-06-11 08:23:48.937081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.317 [2024-06-11 08:23:48.937094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.317 [2024-06-11 08:23:48.937101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.317 [2024-06-11 08:23:48.937106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.317 [2024-06-11 08:23:48.937120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.317 qpair failed and we were unable to recover it. 00:31:18.317 [2024-06-11 08:23:48.947099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.317 [2024-06-11 08:23:48.947160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.317 [2024-06-11 08:23:48.947173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.317 [2024-06-11 08:23:48.947180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.317 [2024-06-11 08:23:48.947185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.317 [2024-06-11 08:23:48.947198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.317 qpair failed and we were unable to recover it. 
00:31:18.317 [2024-06-11 08:23:48.957045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.317 [2024-06-11 08:23:48.957093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.317 [2024-06-11 08:23:48.957106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.317 [2024-06-11 08:23:48.957113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.317 [2024-06-11 08:23:48.957119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.317 [2024-06-11 08:23:48.957132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.317 qpair failed and we were unable to recover it. 00:31:18.580 [2024-06-11 08:23:48.967114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:48.967157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:48.967170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:48.967177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:48.967183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:48.967196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 00:31:18.580 [2024-06-11 08:23:48.977134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:48.977187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:48.977211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:48.977219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:48.977225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:48.977243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 
00:31:18.580 [2024-06-11 08:23:48.987152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:48.987200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:48.987216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:48.987222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:48.987229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:48.987243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 00:31:18.580 [2024-06-11 08:23:48.997147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:48.997194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:48.997208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:48.997215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:48.997225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:48.997239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 00:31:18.580 [2024-06-11 08:23:49.007081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:49.007125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:49.007139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:49.007146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:49.007152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:49.007166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 
00:31:18.580 [2024-06-11 08:23:49.017257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:49.017353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:49.017368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:49.017374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:49.017380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:49.017394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 00:31:18.580 [2024-06-11 08:23:49.027266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:49.027359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:49.027373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:49.027379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:49.027386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:49.027399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 00:31:18.580 [2024-06-11 08:23:49.037281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:49.037328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:49.037342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:49.037349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:49.037355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:49.037369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 
00:31:18.580 [2024-06-11 08:23:49.047309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:49.047355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:49.047369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:49.047375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:49.047381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:49.047394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.580 qpair failed and we were unable to recover it. 00:31:18.580 [2024-06-11 08:23:49.057198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.580 [2024-06-11 08:23:49.057245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.580 [2024-06-11 08:23:49.057258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.580 [2024-06-11 08:23:49.057265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.580 [2024-06-11 08:23:49.057271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.580 [2024-06-11 08:23:49.057285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.067356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.067404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.067417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.067424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.067430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.067449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 
00:31:18.581 [2024-06-11 08:23:49.077363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.077409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.077423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.077429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.077435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.077454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.087301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.087348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.087362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.087372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.087378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.087397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.097459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.097509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.097523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.097530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.097536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.097551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 
00:31:18.581 [2024-06-11 08:23:49.107481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.107527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.107541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.107547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.107554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.107567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.117487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.117530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.117543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.117550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.117556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.117569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.127427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.127479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.127493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.127499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.127505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.127519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 
00:31:18.581 [2024-06-11 08:23:49.137534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.137584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.137597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.137604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.137610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.137625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.147585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.147628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.147641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.147648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.147654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.147667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.157608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.157672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.157685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.157692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.157698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.157712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 
00:31:18.581 [2024-06-11 08:23:49.167517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.167577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.167590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.167597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.167603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.167616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.177656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.177703] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.177718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.177728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.177734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.177747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 00:31:18.581 [2024-06-11 08:23:49.187706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.187752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.187766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.581 [2024-06-11 08:23:49.187773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.581 [2024-06-11 08:23:49.187779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.581 [2024-06-11 08:23:49.187792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.581 qpair failed and we were unable to recover it. 
00:31:18.581 [2024-06-11 08:23:49.197691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.581 [2024-06-11 08:23:49.197752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.581 [2024-06-11 08:23:49.197765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.582 [2024-06-11 08:23:49.197772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.582 [2024-06-11 08:23:49.197778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.582 [2024-06-11 08:23:49.197792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.582 qpair failed and we were unable to recover it. 00:31:18.582 [2024-06-11 08:23:49.207752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.582 [2024-06-11 08:23:49.207799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.582 [2024-06-11 08:23:49.207812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.582 [2024-06-11 08:23:49.207819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.582 [2024-06-11 08:23:49.207825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.582 [2024-06-11 08:23:49.207838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.582 qpair failed and we were unable to recover it. 00:31:18.582 [2024-06-11 08:23:49.217762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.582 [2024-06-11 08:23:49.217849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.582 [2024-06-11 08:23:49.217862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.582 [2024-06-11 08:23:49.217869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.582 [2024-06-11 08:23:49.217875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.582 [2024-06-11 08:23:49.217889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.582 qpair failed and we were unable to recover it. 
00:31:18.861 [2024-06-11 08:23:49.227783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.861 [2024-06-11 08:23:49.227830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.861 [2024-06-11 08:23:49.227844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.861 [2024-06-11 08:23:49.227851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.861 [2024-06-11 08:23:49.227857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.861 [2024-06-11 08:23:49.227871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.861 qpair failed and we were unable to recover it. 00:31:18.861 [2024-06-11 08:23:49.237806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.861 [2024-06-11 08:23:49.237849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.861 [2024-06-11 08:23:49.237863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.861 [2024-06-11 08:23:49.237870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.861 [2024-06-11 08:23:49.237876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.861 [2024-06-11 08:23:49.237890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.861 qpair failed and we were unable to recover it. 00:31:18.861 [2024-06-11 08:23:49.247869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.861 [2024-06-11 08:23:49.247960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.247973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.247980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.247986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.248000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 
00:31:18.862 [2024-06-11 08:23:49.257895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.257941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.257955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.257961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.257967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.257981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.267925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.267971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.267988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.267995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.268001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.268014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.277922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.277962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.277975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.277982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.277988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.278002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 
00:31:18.862 [2024-06-11 08:23:49.287850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.287902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.287916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.287923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.287929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.287943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.298003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.298095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.298109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.298115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.298121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.298135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.308045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.308092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.308105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.308112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.308118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.308135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 
00:31:18.862 [2024-06-11 08:23:49.318025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.318070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.318087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.318095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.318102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.318115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.328094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.328139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.328152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.328159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.328165] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.328179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.337983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.338031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.338045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.338051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.338058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.338071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 
00:31:18.862 [2024-06-11 08:23:49.348180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.348226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.348240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.348246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.348252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.348266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.358167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.358219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.358247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.358256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.358262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.358280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 00:31:18.862 [2024-06-11 08:23:49.368199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.368247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.368263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.368269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.368275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.862 [2024-06-11 08:23:49.368290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.862 qpair failed and we were unable to recover it. 
00:31:18.862 [2024-06-11 08:23:49.378213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.862 [2024-06-11 08:23:49.378264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.862 [2024-06-11 08:23:49.378278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.862 [2024-06-11 08:23:49.378285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.862 [2024-06-11 08:23:49.378291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.378304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.388236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.388285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.388299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.388305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.388311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.388325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.398325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.398368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.398381] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.398388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.398394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.398414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 
00:31:18.863 [2024-06-11 08:23:49.408317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.408361] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.408374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.408381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.408387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.408401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.418326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.418383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.418397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.418403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.418409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.418422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.428357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.428399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.428412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.428419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.428425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.428451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 
00:31:18.863 [2024-06-11 08:23:49.438378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.438426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.438444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.438451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.438457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.438471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.448399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.448451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.448465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.448472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.448478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.448491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.458431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.458485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.458499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.458505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.458511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.458525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 
00:31:18.863 [2024-06-11 08:23:49.468430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.468479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.468492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.468499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.468505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.468519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.478500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.478547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.478560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.478567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.478573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.478587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 00:31:18.863 [2024-06-11 08:23:49.488584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:18.863 [2024-06-11 08:23:49.488644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:18.863 [2024-06-11 08:23:49.488657] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:18.863 [2024-06-11 08:23:49.488664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:18.863 [2024-06-11 08:23:49.488673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:18.863 [2024-06-11 08:23:49.488687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:18.863 qpair failed and we were unable to recover it. 
00:31:19.133 [2024-06-11 08:23:49.498528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.498575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.498588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.498594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.498600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.498614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 00:31:19.133 [2024-06-11 08:23:49.508577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.508622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.508635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.508642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.508648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.508661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 00:31:19.133 [2024-06-11 08:23:49.518572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.518637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.518651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.518657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.518663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.518677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 
00:31:19.133 [2024-06-11 08:23:49.528632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.528673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.528687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.528694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.528700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.528713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 00:31:19.133 [2024-06-11 08:23:49.538539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.538648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.538662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.538668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.538674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.538688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 00:31:19.133 [2024-06-11 08:23:49.548721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.548766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.548780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.548786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.548792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.548806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 
00:31:19.133 [2024-06-11 08:23:49.558701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.558751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.558765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.558771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.558777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.558791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 00:31:19.133 [2024-06-11 08:23:49.568633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.568697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.568710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.133 [2024-06-11 08:23:49.568717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.133 [2024-06-11 08:23:49.568723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.133 [2024-06-11 08:23:49.568736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.133 qpair failed and we were unable to recover it. 00:31:19.133 [2024-06-11 08:23:49.578762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.133 [2024-06-11 08:23:49.578818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.133 [2024-06-11 08:23:49.578831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.578842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.578848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.578861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 
00:31:19.134 [2024-06-11 08:23:49.588779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.588825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.588838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.588845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.588850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.588864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.598812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.598860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.598873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.598880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.598886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.598899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.608866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.608915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.608930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.608936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.608942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.608956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 
00:31:19.134 [2024-06-11 08:23:49.618877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.618921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.618935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.618941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.618947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.618960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.628904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.628954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.628968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.628974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.628980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.628994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.638963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.639058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.639071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.639078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.639084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.639098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 
00:31:19.134 [2024-06-11 08:23:49.649030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.649078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.649091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.649097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.649103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.649116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.659018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.659069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.659082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.659089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.659095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.659108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.668893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.668938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.668951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.668961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.668966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.668980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 
00:31:19.134 [2024-06-11 08:23:49.678925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.678969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.678983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.678989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.678995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.679009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.689087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.689143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.689157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.689164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.689170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.689185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 00:31:19.134 [2024-06-11 08:23:49.699109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.699160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.699174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.699181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.699187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.699200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.134 qpair failed and we were unable to recover it. 
00:31:19.134 [2024-06-11 08:23:49.709149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.134 [2024-06-11 08:23:49.709198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.134 [2024-06-11 08:23:49.709211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.134 [2024-06-11 08:23:49.709218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.134 [2024-06-11 08:23:49.709224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.134 [2024-06-11 08:23:49.709237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.135 qpair failed and we were unable to recover it. 00:31:19.135 [2024-06-11 08:23:49.719045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.135 [2024-06-11 08:23:49.719088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.135 [2024-06-11 08:23:49.719102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.135 [2024-06-11 08:23:49.719109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.135 [2024-06-11 08:23:49.719114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.135 [2024-06-11 08:23:49.719128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.135 qpair failed and we were unable to recover it. 00:31:19.135 [2024-06-11 08:23:49.729200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.135 [2024-06-11 08:23:49.729246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.135 [2024-06-11 08:23:49.729260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.135 [2024-06-11 08:23:49.729266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.135 [2024-06-11 08:23:49.729272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.135 [2024-06-11 08:23:49.729285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.135 qpair failed and we were unable to recover it. 
00:31:19.135 [2024-06-11 08:23:49.739206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.135 [2024-06-11 08:23:49.739261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.135 [2024-06-11 08:23:49.739284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.135 [2024-06-11 08:23:49.739292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.135 [2024-06-11 08:23:49.739299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.135 [2024-06-11 08:23:49.739317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.135 qpair failed and we were unable to recover it. 00:31:19.135 [2024-06-11 08:23:49.749133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.135 [2024-06-11 08:23:49.749180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.135 [2024-06-11 08:23:49.749195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.135 [2024-06-11 08:23:49.749202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.135 [2024-06-11 08:23:49.749208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.135 [2024-06-11 08:23:49.749223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.135 qpair failed and we were unable to recover it. 00:31:19.135 [2024-06-11 08:23:49.759282] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.135 [2024-06-11 08:23:49.759365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.135 [2024-06-11 08:23:49.759383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.135 [2024-06-11 08:23:49.759390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.135 [2024-06-11 08:23:49.759397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.135 [2024-06-11 08:23:49.759410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.135 qpair failed and we were unable to recover it. 
00:31:19.135 [2024-06-11 08:23:49.769304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.135 [2024-06-11 08:23:49.769346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.135 [2024-06-11 08:23:49.769359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.135 [2024-06-11 08:23:49.769366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.135 [2024-06-11 08:23:49.769372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.135 [2024-06-11 08:23:49.769385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.135 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.779332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.779379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.779393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.779399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.779406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.779419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.789371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.789413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.789426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.789433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.789443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.789457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 
00:31:19.398 [2024-06-11 08:23:49.799393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.799443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.799456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.799463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.799469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.799486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.809423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.809471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.809484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.809491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.809496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.809510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.819441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.819493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.819507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.819514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.819520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.819533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 
00:31:19.398 [2024-06-11 08:23:49.829474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.829523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.829536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.829543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.829549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.829562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.839494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.839549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.839562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.839569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.839575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.839588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.849524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.849599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.849616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.849623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.849629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.849642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 
00:31:19.398 [2024-06-11 08:23:49.859528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.859580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.859594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.859601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.859607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.859620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.869555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.869602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.869615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.869622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.869628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.869641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 00:31:19.398 [2024-06-11 08:23:49.879608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.879655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.398 [2024-06-11 08:23:49.879668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.398 [2024-06-11 08:23:49.879675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.398 [2024-06-11 08:23:49.879681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.398 [2024-06-11 08:23:49.879695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.398 qpair failed and we were unable to recover it. 
00:31:19.398 [2024-06-11 08:23:49.889665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.398 [2024-06-11 08:23:49.889710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.889724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.889730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.889736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.889753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.899672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.899744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.899757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.899764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.899770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.899783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.909711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.909760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.909773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.909779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.909785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.909799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 
00:31:19.399 [2024-06-11 08:23:49.919591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.919638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.919653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.919660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.919666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.919680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.929725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.929768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.929782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.929789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.929795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.929808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.939816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.939902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.939919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.939926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.939932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.939947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 
00:31:19.399 [2024-06-11 08:23:49.949801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.949852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.949866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.949872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.949878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.949892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.959825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.959868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.959881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.959888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.959894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.959907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.969869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.969914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.969927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.969933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.969939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.969952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 
00:31:19.399 [2024-06-11 08:23:49.979881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.979978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.979991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.979998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.980007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.980020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.989925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.989964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.989977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:49.989984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:49.989990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:49.990003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:49.999937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:49.999986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:49.999999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:50.000005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:50.000011] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:50.000025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 
00:31:19.399 [2024-06-11 08:23:50.009988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:50.010030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:50.010045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:50.010051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:50.010057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.399 [2024-06-11 08:23:50.010071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.399 qpair failed and we were unable to recover it. 00:31:19.399 [2024-06-11 08:23:50.020021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.399 [2024-06-11 08:23:50.020071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.399 [2024-06-11 08:23:50.020084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.399 [2024-06-11 08:23:50.020091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.399 [2024-06-11 08:23:50.020097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.400 [2024-06-11 08:23:50.020110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.400 qpair failed and we were unable to recover it. 00:31:19.400 [2024-06-11 08:23:50.029908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.400 [2024-06-11 08:23:50.029958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.400 [2024-06-11 08:23:50.029973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.400 [2024-06-11 08:23:50.029979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.400 [2024-06-11 08:23:50.029985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.400 [2024-06-11 08:23:50.030005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.400 qpair failed and we were unable to recover it. 
00:31:19.400 [2024-06-11 08:23:50.040108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.400 [2024-06-11 08:23:50.040185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.400 [2024-06-11 08:23:50.040201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.400 [2024-06-11 08:23:50.040208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.400 [2024-06-11 08:23:50.040214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.400 [2024-06-11 08:23:50.040228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.400 qpair failed and we were unable to recover it. 00:31:19.661 [2024-06-11 08:23:50.050064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.661 [2024-06-11 08:23:50.050110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.661 [2024-06-11 08:23:50.050124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.661 [2024-06-11 08:23:50.050131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.661 [2024-06-11 08:23:50.050137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.661 [2024-06-11 08:23:50.050151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.661 qpair failed and we were unable to recover it. 00:31:19.661 [2024-06-11 08:23:50.060009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.661 [2024-06-11 08:23:50.060058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.661 [2024-06-11 08:23:50.060072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.662 [2024-06-11 08:23:50.060078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.662 [2024-06-11 08:23:50.060084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.662 [2024-06-11 08:23:50.060097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.662 qpair failed and we were unable to recover it. 
00:31:19.662 [2024-06-11 08:23:50.070200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.662 [2024-06-11 08:23:50.070263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.662 [2024-06-11 08:23:50.070277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.662 [2024-06-11 08:23:50.070283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.662 [2024-06-11 08:23:50.070293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.662 [2024-06-11 08:23:50.070307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.662 qpair failed and we were unable to recover it. 00:31:19.662 [2024-06-11 08:23:50.080181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.662 [2024-06-11 08:23:50.080226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.662 [2024-06-11 08:23:50.080239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.662 [2024-06-11 08:23:50.080246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.662 [2024-06-11 08:23:50.080252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.662 [2024-06-11 08:23:50.080265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.662 qpair failed and we were unable to recover it. 00:31:19.662 [2024-06-11 08:23:50.090179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.662 [2024-06-11 08:23:50.090262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.662 [2024-06-11 08:23:50.090275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.662 [2024-06-11 08:23:50.090282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.662 [2024-06-11 08:23:50.090288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.662 [2024-06-11 08:23:50.090302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.662 qpair failed and we were unable to recover it. 
00:31:19.662 [2024-06-11 08:23:50.100233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.662 [2024-06-11 08:23:50.100281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.662 [2024-06-11 08:23:50.100295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.662 [2024-06-11 08:23:50.100301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.662 [2024-06-11 08:23:50.100307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.662 [2024-06-11 08:23:50.100321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.662 qpair failed and we were unable to recover it. 00:31:19.662 [2024-06-11 08:23:50.110140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.662 [2024-06-11 08:23:50.110197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.662 [2024-06-11 08:23:50.110211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.662 [2024-06-11 08:23:50.110218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.662 [2024-06-11 08:23:50.110224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.662 [2024-06-11 08:23:50.110239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.662 qpair failed and we were unable to recover it. 00:31:19.662 [2024-06-11 08:23:50.120169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.662 [2024-06-11 08:23:50.120216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.662 [2024-06-11 08:23:50.120229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.662 [2024-06-11 08:23:50.120237] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.662 [2024-06-11 08:23:50.120243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.663 [2024-06-11 08:23:50.120257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.663 qpair failed and we were unable to recover it. 
00:31:19.663 [2024-06-11 08:23:50.130328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.663 [2024-06-11 08:23:50.130375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.663 [2024-06-11 08:23:50.130389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.663 [2024-06-11 08:23:50.130396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.663 [2024-06-11 08:23:50.130402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.663 [2024-06-11 08:23:50.130416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.663 qpair failed and we were unable to recover it. 00:31:19.663 [2024-06-11 08:23:50.140352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.663 [2024-06-11 08:23:50.140401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.663 [2024-06-11 08:23:50.140414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.663 [2024-06-11 08:23:50.140421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.663 [2024-06-11 08:23:50.140427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.663 [2024-06-11 08:23:50.140446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.663 qpair failed and we were unable to recover it. 00:31:19.663 [2024-06-11 08:23:50.150399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.663 [2024-06-11 08:23:50.150451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.663 [2024-06-11 08:23:50.150465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.663 [2024-06-11 08:23:50.150472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.663 [2024-06-11 08:23:50.150478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa788000b90 00:31:19.663 [2024-06-11 08:23:50.150492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.663 qpair failed and we were unable to recover it. 
00:31:19.663 [2024-06-11 08:23:50.150866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x714600 is same with the state(5) to be set 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 [2024-06-11 08:23:50.151268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.663 [2024-06-11 08:23:50.160407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.663 [2024-06-11 08:23:50.160466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.663 [2024-06-11 08:23:50.160485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect 
command completed with error: sct 1, sc 130 00:31:19.663 [2024-06-11 08:23:50.160493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.663 [2024-06-11 08:23:50.160500] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7168b0 00:31:19.663 [2024-06-11 08:23:50.160516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.663 qpair failed and we were unable to recover it. 00:31:19.663 [2024-06-11 08:23:50.170324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.663 [2024-06-11 08:23:50.170375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.663 [2024-06-11 08:23:50.170400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.663 [2024-06-11 08:23:50.170408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.663 [2024-06-11 08:23:50.170415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7168b0 00:31:19.663 [2024-06-11 08:23:50.170432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.663 qpair failed and we were unable to recover it. 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 
00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Write completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.663 starting I/O failed 00:31:19.663 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 [2024-06-11 08:23:50.171260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:19.664 [2024-06-11 08:23:50.180425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.664 [2024-06-11 08:23:50.180532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.664 [2024-06-11 08:23:50.180581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.664 [2024-06-11 08:23:50.180603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.664 [2024-06-11 08:23:50.180622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa798000b90 00:31:19.664 [2024-06-11 08:23:50.180668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:19.664 qpair failed and we were unable to recover it. 00:31:19.664 [2024-06-11 08:23:50.190524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.664 [2024-06-11 08:23:50.190591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.664 [2024-06-11 08:23:50.190620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.664 [2024-06-11 08:23:50.190634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.664 [2024-06-11 08:23:50.190648] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa798000b90 00:31:19.664 [2024-06-11 08:23:50.190678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:19.664 qpair failed and we were unable to recover it. 
00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Write completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 Read completed with error (sct=0, sc=8) 00:31:19.664 starting I/O failed 00:31:19.664 [2024-06-11 08:23:50.191057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.664 [2024-06-11 08:23:50.200383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.664 [2024-06-11 08:23:50.200430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.664 [2024-06-11 08:23:50.200448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.664 [2024-06-11 08:23:50.200454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:31:19.664 [2024-06-11 08:23:50.200458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa790000b90 00:31:19.664 [2024-06-11 08:23:50.200470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.664 qpair failed and we were unable to recover it. 00:31:19.664 [2024-06-11 08:23:50.210524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:19.664 [2024-06-11 08:23:50.210576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:19.664 [2024-06-11 08:23:50.210594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:19.664 [2024-06-11 08:23:50.210599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:19.664 [2024-06-11 08:23:50.210604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa790000b90 00:31:19.664 [2024-06-11 08:23:50.210617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.664 qpair failed and we were unable to recover it. 00:31:19.664 [2024-06-11 08:23:50.210966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x714600 (9): Bad file descriptor 00:31:19.664 Initializing NVMe Controllers 00:31:19.664 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:19.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:19.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:19.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:19.664 Initialization complete. Launching workers. 
00:31:19.664 Starting thread on core 1 00:31:19.664 Starting thread on core 2 00:31:19.664 Starting thread on core 3 00:31:19.664 Starting thread on core 0 00:31:19.664 08:23:50 -- host/target_disconnect.sh@59 -- # sync 00:31:19.664 00:31:19.664 real 0m11.456s 00:31:19.664 user 0m21.229s 00:31:19.664 sys 0m3.752s 00:31:19.664 08:23:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.664 08:23:50 -- common/autotest_common.sh@10 -- # set +x 00:31:19.664 ************************************ 00:31:19.664 END TEST nvmf_target_disconnect_tc2 00:31:19.664 ************************************ 00:31:19.664 08:23:50 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:19.664 08:23:50 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:19.664 08:23:50 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:19.664 08:23:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:19.664 08:23:50 -- nvmf/common.sh@116 -- # sync 00:31:19.664 08:23:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:19.664 08:23:50 -- nvmf/common.sh@119 -- # set +e 00:31:19.664 08:23:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:19.664 08:23:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:19.664 rmmod nvme_tcp 00:31:19.664 rmmod nvme_fabrics 00:31:19.664 rmmod nvme_keyring 00:31:19.924 08:23:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:19.924 08:23:50 -- nvmf/common.sh@123 -- # set -e 00:31:19.924 08:23:50 -- nvmf/common.sh@124 -- # return 0 00:31:19.924 08:23:50 -- nvmf/common.sh@477 -- # '[' -n 1260432 ']' 00:31:19.924 08:23:50 -- nvmf/common.sh@478 -- # killprocess 1260432 00:31:19.924 08:23:50 -- common/autotest_common.sh@926 -- # '[' -z 1260432 ']' 00:31:19.924 08:23:50 -- common/autotest_common.sh@930 -- # kill -0 1260432 00:31:19.924 08:23:50 -- common/autotest_common.sh@931 -- # uname 00:31:19.924 08:23:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:19.924 08:23:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1260432 00:31:19.924 08:23:50 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:31:19.924 08:23:50 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:31:19.924 08:23:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1260432' 00:31:19.924 killing process with pid 1260432 00:31:19.924 08:23:50 -- common/autotest_common.sh@945 -- # kill 1260432 00:31:19.924 08:23:50 -- common/autotest_common.sh@950 -- # wait 1260432 00:31:19.924 08:23:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:19.924 08:23:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:19.924 08:23:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:19.924 08:23:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:19.924 08:23:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:19.924 08:23:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.924 08:23:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.924 08:23:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.465 08:23:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:22.465 00:31:22.465 real 0m21.416s 00:31:22.465 user 0m49.170s 00:31:22.465 sys 0m9.527s 00:31:22.465 08:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.465 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.465 ************************************ 00:31:22.465 END TEST nvmf_target_disconnect 00:31:22.465 
************************************ 00:31:22.465 08:23:52 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:31:22.465 08:23:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:22.465 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.465 08:23:52 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:31:22.465 00:31:22.465 real 24m19.413s 00:31:22.465 user 64m32.114s 00:31:22.465 sys 6m31.638s 00:31:22.465 08:23:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.465 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.465 ************************************ 00:31:22.465 END TEST nvmf_tcp 00:31:22.465 ************************************ 00:31:22.465 08:23:52 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:31:22.465 08:23:52 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:22.465 08:23:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:22.465 08:23:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:22.465 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.465 ************************************ 00:31:22.465 START TEST spdkcli_nvmf_tcp 00:31:22.465 ************************************ 00:31:22.465 08:23:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:22.465 * Looking for test storage... 00:31:22.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:22.465 08:23:52 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:22.465 08:23:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:22.465 08:23:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:22.465 08:23:52 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.465 08:23:52 -- nvmf/common.sh@7 -- # uname -s 00:31:22.465 08:23:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.465 08:23:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.466 08:23:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.466 08:23:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.466 08:23:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.466 08:23:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.466 08:23:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.466 08:23:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.466 08:23:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.466 08:23:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.466 08:23:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.466 08:23:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:22.466 08:23:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.466 08:23:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.466 08:23:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.466 08:23:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.466 08:23:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:31:22.466 08:23:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.466 08:23:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.466 08:23:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 08:23:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 08:23:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 08:23:52 -- paths/export.sh@5 -- # export PATH 00:31:22.466 08:23:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.466 08:23:52 -- nvmf/common.sh@46 -- # : 0 00:31:22.466 08:23:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:22.466 08:23:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:22.466 08:23:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:22.466 08:23:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.466 08:23:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.466 08:23:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:22.466 08:23:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:22.466 08:23:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:22.466 08:23:52 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:22.466 08:23:52 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:22.466 08:23:52 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:22.466 08:23:52 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:22.466 08:23:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:22.466 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.466 08:23:52 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:22.466 08:23:52 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1262333 00:31:22.466 08:23:52 -- spdkcli/common.sh@34 -- # waitforlisten 1262333 00:31:22.466 08:23:52 -- common/autotest_common.sh@819 -- # '[' -z 1262333 ']' 00:31:22.466 08:23:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.466 08:23:52 -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:22.466 08:23:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:22.466 08:23:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.466 08:23:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:22.466 08:23:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.466 [2024-06-11 08:23:52.880713] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:22.466 [2024-06-11 08:23:52.880784] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262333 ] 00:31:22.466 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.466 [2024-06-11 08:23:52.945740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:22.466 [2024-06-11 08:23:53.021739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:22.466 [2024-06-11 08:23:53.021987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.466 [2024-06-11 08:23:53.021989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.035 08:23:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:23.035 08:23:53 -- common/autotest_common.sh@852 -- # return 0 00:31:23.035 08:23:53 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:23.035 08:23:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:23.035 08:23:53 -- common/autotest_common.sh@10 -- # set +x 00:31:23.035 08:23:53 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:23.035 08:23:53 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:23.035 08:23:53 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:23.035 08:23:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:23.035 08:23:53 -- common/autotest_common.sh@10 -- # set +x 00:31:23.296 08:23:53 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:23.296 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:23.296 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:23.296 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:23.296 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:23.296 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:23.296 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:23.296 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.296 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.296 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:23.296 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:23.296 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:23.296 ' 00:31:23.557 [2024-06-11 08:23:54.007201] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:25.517 [2024-06-11 08:23:56.011225] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.903 [2024-06-11 08:23:57.311446] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:29.453 [2024-06-11 08:23:59.722701] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:31.368 [2024-06-11 08:24:01.596399] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:32.752 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:32.752 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:32.752 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:32.752 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:32.752 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:32.752 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:32.752 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:32.752 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:32.752 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:32.752 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:32.752 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:32.752 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:32.752 08:24:03 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:32.752 08:24:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:32.752 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:31:32.752 08:24:03 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:32.752 08:24:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:32.752 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:31:32.752 08:24:03 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:32.752 08:24:03 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:33.013 08:24:03 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:33.013 08:24:03 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:33.013 08:24:03 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:33.013 08:24:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:33.013 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.014 08:24:03 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:33.014 08:24:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:33.014 08:24:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.014 08:24:03 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:33.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:33.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:33.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:33.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:33.014 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:33.014 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:33.014 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:33.014 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:33.014 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:33.014 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:33.014 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:33.014 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:33.014 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:33.014 ' 00:31:38.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:38.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:38.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:38.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:38.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:38.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:38.299 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:38.299 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:38.299 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:38.299 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:31:38.299 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:38.299 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:38.299 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:38.299 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:38.299 08:24:08 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:38.299 08:24:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:38.299 08:24:08 -- common/autotest_common.sh@10 -- # set +x 00:31:38.299 08:24:08 -- spdkcli/nvmf.sh@90 -- # killprocess 1262333 00:31:38.299 08:24:08 -- common/autotest_common.sh@926 -- # '[' -z 1262333 ']' 00:31:38.299 08:24:08 -- common/autotest_common.sh@930 -- # kill -0 1262333 00:31:38.299 08:24:08 -- common/autotest_common.sh@931 -- # uname 00:31:38.299 08:24:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:38.299 08:24:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1262333 00:31:38.299 08:24:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:38.299 08:24:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:38.299 08:24:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1262333' 00:31:38.299 killing process with pid 1262333 00:31:38.299 08:24:08 -- common/autotest_common.sh@945 -- # kill 1262333 00:31:38.299 [2024-06-11 08:24:08.535224] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:38.299 08:24:08 -- common/autotest_common.sh@950 -- # wait 1262333 00:31:38.299 08:24:08 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:38.299 08:24:08 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:38.299 08:24:08 -- spdkcli/common.sh@13 -- # '[' -n 1262333 ']' 00:31:38.299 08:24:08 -- spdkcli/common.sh@14 -- # killprocess 1262333 00:31:38.299 08:24:08 -- common/autotest_common.sh@926 -- # '[' -z 1262333 ']' 00:31:38.299 08:24:08 -- common/autotest_common.sh@930 -- # kill -0 1262333 00:31:38.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1262333) - No such process 00:31:38.299 08:24:08 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1262333 is not found' 00:31:38.299 Process with pid 1262333 is not found 00:31:38.299 08:24:08 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:38.299 08:24:08 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:38.299 08:24:08 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:38.299 00:31:38.299 real 0m15.966s 00:31:38.299 user 0m33.414s 00:31:38.299 sys 0m0.725s 00:31:38.299 08:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:38.299 08:24:08 -- common/autotest_common.sh@10 -- # set +x 00:31:38.299 ************************************ 00:31:38.299 END TEST spdkcli_nvmf_tcp 00:31:38.299 ************************************ 00:31:38.299 08:24:08 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:38.299 08:24:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:38.299 08:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:38.299 08:24:08 -- 
common/autotest_common.sh@10 -- # set +x 00:31:38.299 ************************************ 00:31:38.299 START TEST nvmf_identify_passthru 00:31:38.299 ************************************ 00:31:38.299 08:24:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:38.299 * Looking for test storage... 00:31:38.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:38.299 08:24:08 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.299 08:24:08 -- nvmf/common.sh@7 -- # uname -s 00:31:38.299 08:24:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.299 08:24:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.299 08:24:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.299 08:24:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.299 08:24:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.299 08:24:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.299 08:24:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.299 08:24:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.299 08:24:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.299 08:24:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.299 08:24:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:38.299 08:24:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:38.299 08:24:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.299 08:24:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.299 08:24:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.299 08:24:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.299 08:24:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.299 08:24:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.299 08:24:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.299 08:24:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.299 08:24:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.299 08:24:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.299 08:24:08 -- paths/export.sh@5 -- # export PATH 00:31:38.299 08:24:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.299 08:24:08 -- nvmf/common.sh@46 -- # : 0 00:31:38.299 08:24:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:38.299 08:24:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:38.299 08:24:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:38.299 08:24:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.299 08:24:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.299 08:24:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:38.299 08:24:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:38.299 08:24:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:38.299 08:24:08 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.300 08:24:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.300 08:24:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.300 08:24:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.300 08:24:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.300 08:24:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.300 08:24:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.300 08:24:08 -- paths/export.sh@5 -- # export PATH 00:31:38.300 08:24:08 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.300 08:24:08 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:38.300 08:24:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:38.300 08:24:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.300 08:24:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:38.300 08:24:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:38.300 08:24:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:38.300 08:24:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.300 08:24:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:38.300 08:24:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.300 08:24:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:38.300 08:24:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:38.300 08:24:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:38.300 08:24:08 -- common/autotest_common.sh@10 -- # set +x 00:31:46.440 08:24:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:46.440 08:24:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:46.440 08:24:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:46.440 08:24:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:46.440 08:24:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:46.440 08:24:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:46.440 08:24:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:46.440 08:24:15 -- nvmf/common.sh@294 -- # net_devs=() 00:31:46.440 08:24:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:46.440 08:24:15 -- nvmf/common.sh@295 -- # e810=() 00:31:46.440 08:24:15 -- nvmf/common.sh@295 -- # local -ga e810 00:31:46.440 08:24:15 -- nvmf/common.sh@296 -- # x722=() 00:31:46.440 08:24:15 -- nvmf/common.sh@296 -- # local -ga x722 00:31:46.440 08:24:15 -- nvmf/common.sh@297 -- # mlx=() 00:31:46.440 08:24:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:46.440 08:24:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.440 08:24:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:46.440 08:24:15 -- 
nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:46.440 08:24:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:46.440 08:24:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:46.440 08:24:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:46.440 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:46.440 08:24:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:46.440 08:24:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:46.440 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:46.440 08:24:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:46.440 08:24:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:46.440 08:24:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.440 08:24:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:46.440 08:24:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.440 08:24:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:46.440 Found net devices under 0000:31:00.0: cvl_0_0 00:31:46.440 08:24:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.440 08:24:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:46.440 08:24:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.440 08:24:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:46.440 08:24:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.440 08:24:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:46.440 Found net devices under 0000:31:00.1: cvl_0_1 00:31:46.440 08:24:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.440 08:24:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:46.440 08:24:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:46.440 08:24:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:46.440 08:24:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.440 08:24:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.440 08:24:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.440 08:24:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:46.440 08:24:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.440 08:24:15 -- nvmf/common.sh@236 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.440 08:24:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:46.440 08:24:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.440 08:24:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.440 08:24:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:46.440 08:24:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:46.440 08:24:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.440 08:24:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.440 08:24:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.440 08:24:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.440 08:24:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:46.440 08:24:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.440 08:24:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.440 08:24:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.440 08:24:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:46.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:31:46.440 00:31:46.440 --- 10.0.0.2 ping statistics --- 00:31:46.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.440 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:31:46.440 08:24:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:46.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:31:46.440 00:31:46.440 --- 10.0.0.1 ping statistics --- 00:31:46.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.440 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:31:46.440 08:24:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.440 08:24:15 -- nvmf/common.sh@410 -- # return 0 00:31:46.440 08:24:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:46.440 08:24:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.440 08:24:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:46.440 08:24:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.440 08:24:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:46.440 08:24:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:46.440 08:24:15 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:46.440 08:24:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:46.440 08:24:15 -- common/autotest_common.sh@10 -- # set +x 00:31:46.440 08:24:15 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:46.440 08:24:15 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:46.440 08:24:15 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:46.440 08:24:15 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:46.440 08:24:15 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:46.440 08:24:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:46.440 08:24:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:46.440 08:24:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:31:46.440 08:24:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:46.441 08:24:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:46.441 08:24:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:46.441 08:24:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:31:46.441 08:24:16 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:31:46.441 08:24:16 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:46.441 08:24:16 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:46.441 08:24:16 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:46.441 08:24:16 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:46.441 08:24:16 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:46.441 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.441 08:24:16 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:31:46.441 08:24:16 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:46.441 08:24:16 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:46.441 08:24:16 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:46.441 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.441 08:24:17 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:46.441 08:24:17 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:46.441 08:24:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:46.441 08:24:17 -- common/autotest_common.sh@10 -- # set +x 00:31:46.441 08:24:17 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:46.441 08:24:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:46.441 08:24:17 -- common/autotest_common.sh@10 -- # set +x 00:31:46.441 08:24:17 -- target/identify_passthru.sh@31 -- # nvmfpid=1269979 00:31:46.441 08:24:17 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.441 08:24:17 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:46.441 08:24:17 -- target/identify_passthru.sh@35 -- # waitforlisten 1269979 00:31:46.441 08:24:17 -- common/autotest_common.sh@819 -- # '[' -z 1269979 ']' 00:31:46.441 08:24:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.441 08:24:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:46.441 08:24:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.441 08:24:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:46.441 08:24:17 -- common/autotest_common.sh@10 -- # set +x 00:31:46.702 [2024-06-11 08:24:17.114363] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:31:46.702 [2024-06-11 08:24:17.114410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.702 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.702 [2024-06-11 08:24:17.179999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:46.702 [2024-06-11 08:24:17.244691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:46.702 [2024-06-11 08:24:17.244819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.702 [2024-06-11 08:24:17.244829] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.702 [2024-06-11 08:24:17.244837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.702 [2024-06-11 08:24:17.244974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:46.702 [2024-06-11 08:24:17.245094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:46.702 [2024-06-11 08:24:17.245249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.702 [2024-06-11 08:24:17.245250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:47.275 08:24:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:47.275 08:24:17 -- common/autotest_common.sh@852 -- # return 0 00:31:47.275 08:24:17 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:47.275 08:24:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.275 08:24:17 -- common/autotest_common.sh@10 -- # set +x 00:31:47.275 INFO: Log level set to 20 00:31:47.275 INFO: Requests: 00:31:47.275 { 00:31:47.275 "jsonrpc": "2.0", 00:31:47.275 "method": "nvmf_set_config", 00:31:47.275 "id": 1, 00:31:47.275 "params": { 00:31:47.275 "admin_cmd_passthru": { 00:31:47.275 "identify_ctrlr": true 00:31:47.275 } 00:31:47.275 } 00:31:47.275 } 00:31:47.275 00:31:47.275 INFO: response: 00:31:47.275 { 00:31:47.275 "jsonrpc": "2.0", 00:31:47.275 "id": 1, 00:31:47.275 "result": true 00:31:47.275 } 00:31:47.275 00:31:47.275 08:24:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.275 08:24:17 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:47.275 08:24:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.275 08:24:17 -- common/autotest_common.sh@10 -- # set +x 00:31:47.275 INFO: Setting log level to 20 00:31:47.275 INFO: Setting log level to 20 00:31:47.275 INFO: Log level set to 20 00:31:47.275 INFO: Log level set to 20 00:31:47.275 INFO: Requests: 00:31:47.275 { 00:31:47.275 "jsonrpc": "2.0", 00:31:47.275 "method": "framework_start_init", 00:31:47.275 "id": 1 00:31:47.275 } 00:31:47.275 00:31:47.275 INFO: Requests: 00:31:47.275 { 00:31:47.275 "jsonrpc": "2.0", 00:31:47.275 "method": "framework_start_init", 00:31:47.275 "id": 1 00:31:47.275 } 00:31:47.275 00:31:47.536 [2024-06-11 08:24:17.958864] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:47.536 INFO: response: 00:31:47.536 { 00:31:47.536 "jsonrpc": "2.0", 00:31:47.536 "id": 1, 00:31:47.536 "result": true 00:31:47.536 } 00:31:47.536 00:31:47.536 INFO: response: 00:31:47.536 { 00:31:47.536 "jsonrpc": "2.0", 00:31:47.536 "id": 1, 00:31:47.536 "result": true 00:31:47.536 } 00:31:47.536 00:31:47.536 08:24:17 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.536 08:24:17 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:47.536 08:24:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.536 08:24:17 -- common/autotest_common.sh@10 -- # set +x 00:31:47.536 INFO: Setting log level to 40 00:31:47.536 INFO: Setting log level to 40 00:31:47.536 INFO: Setting log level to 40 00:31:47.536 [2024-06-11 08:24:17.972120] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.536 08:24:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.536 08:24:17 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:47.536 08:24:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:47.536 08:24:17 -- common/autotest_common.sh@10 -- # set +x 00:31:47.536 08:24:18 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:47.536 08:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.536 08:24:18 -- common/autotest_common.sh@10 -- # set +x 00:31:47.796 Nvme0n1 00:31:47.796 08:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.796 08:24:18 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:47.796 08:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.796 08:24:18 -- common/autotest_common.sh@10 -- # set +x 00:31:47.796 08:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.796 08:24:18 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:47.796 08:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.796 08:24:18 -- common/autotest_common.sh@10 -- # set +x 00:31:47.796 08:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.796 08:24:18 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.796 08:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.796 08:24:18 -- common/autotest_common.sh@10 -- # set +x 00:31:47.796 [2024-06-11 08:24:18.356695] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.796 08:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.796 08:24:18 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:47.796 08:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:47.797 08:24:18 -- common/autotest_common.sh@10 -- # set +x 00:31:47.797 [2024-06-11 08:24:18.368491] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:47.797 [ 00:31:47.797 { 00:31:47.797 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:47.797 "subtype": "Discovery", 00:31:47.797 "listen_addresses": [], 00:31:47.797 "allow_any_host": true, 00:31:47.797 "hosts": [] 00:31:47.797 }, 00:31:47.797 { 00:31:47.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.797 "subtype": "NVMe", 00:31:47.797 "listen_addresses": [ 00:31:47.797 { 00:31:47.797 "transport": "TCP", 00:31:47.797 "trtype": "TCP", 00:31:47.797 "adrfam": "IPv4", 00:31:47.797 "traddr": "10.0.0.2", 00:31:47.797 "trsvcid": "4420" 00:31:47.797 } 00:31:47.797 ], 00:31:47.797 "allow_any_host": true, 00:31:47.797 "hosts": [], 00:31:47.797 "serial_number": "SPDK00000000000001", 
00:31:47.797 "model_number": "SPDK bdev Controller", 00:31:47.797 "max_namespaces": 1, 00:31:47.797 "min_cntlid": 1, 00:31:47.797 "max_cntlid": 65519, 00:31:47.797 "namespaces": [ 00:31:47.797 { 00:31:47.797 "nsid": 1, 00:31:47.797 "bdev_name": "Nvme0n1", 00:31:47.797 "name": "Nvme0n1", 00:31:47.797 "nguid": "36344730526054940025384500000027", 00:31:47.797 "uuid": "36344730-5260-5494-0025-384500000027" 00:31:47.797 } 00:31:47.797 ] 00:31:47.797 } 00:31:47.797 ] 00:31:47.797 08:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:47.797 08:24:18 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:47.797 08:24:18 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:47.797 08:24:18 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:47.797 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.059 08:24:18 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:31:48.059 08:24:18 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:48.059 08:24:18 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:48.059 08:24:18 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:48.059 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.319 08:24:18 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:48.319 08:24:18 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:31:48.319 08:24:18 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:48.319 08:24:18 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.319 08:24:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.319 08:24:18 -- common/autotest_common.sh@10 -- # set +x 00:31:48.319 08:24:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.319 08:24:18 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:48.319 08:24:18 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:48.319 08:24:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:48.319 08:24:18 -- nvmf/common.sh@116 -- # sync 00:31:48.319 08:24:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:48.319 08:24:18 -- nvmf/common.sh@119 -- # set +e 00:31:48.319 08:24:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:48.319 08:24:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:48.319 rmmod nvme_tcp 00:31:48.319 rmmod nvme_fabrics 00:31:48.319 rmmod nvme_keyring 00:31:48.319 08:24:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:48.319 08:24:18 -- nvmf/common.sh@123 -- # set -e 00:31:48.319 08:24:18 -- nvmf/common.sh@124 -- # return 0 00:31:48.319 08:24:18 -- nvmf/common.sh@477 -- # '[' -n 1269979 ']' 00:31:48.319 08:24:18 -- nvmf/common.sh@478 -- # killprocess 1269979 00:31:48.319 08:24:18 -- common/autotest_common.sh@926 -- # '[' -z 1269979 ']' 00:31:48.319 08:24:18 -- common/autotest_common.sh@930 -- # kill -0 1269979 00:31:48.319 08:24:18 -- common/autotest_common.sh@931 -- # uname 00:31:48.319 08:24:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:48.319 08:24:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1269979 00:31:48.319 08:24:18 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:48.319 08:24:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:48.319 08:24:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1269979' 00:31:48.319 killing process with pid 1269979 00:31:48.319 08:24:18 -- common/autotest_common.sh@945 -- # kill 1269979 00:31:48.319 [2024-06-11 08:24:18.872975] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:48.319 08:24:18 -- common/autotest_common.sh@950 -- # wait 1269979 00:31:48.580 08:24:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:48.580 08:24:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:48.580 08:24:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:48.580 08:24:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:48.580 08:24:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:48.580 08:24:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.580 08:24:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:48.580 08:24:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.128 08:24:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:51.128 00:31:51.128 real 0m12.493s 00:31:51.128 user 0m9.880s 00:31:51.128 sys 0m5.933s 00:31:51.128 08:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.128 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:31:51.128 ************************************ 00:31:51.128 END TEST nvmf_identify_passthru 00:31:51.128 ************************************ 00:31:51.128 08:24:21 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:51.128 08:24:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:51.128 08:24:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:51.128 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:31:51.128 ************************************ 00:31:51.128 START TEST nvmf_dif 00:31:51.128 ************************************ 00:31:51.128 08:24:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:51.128 * Looking for test storage... 
00:31:51.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:51.128 08:24:21 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:51.128 08:24:21 -- nvmf/common.sh@7 -- # uname -s 00:31:51.128 08:24:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:51.128 08:24:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:51.128 08:24:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:51.128 08:24:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:51.128 08:24:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:51.128 08:24:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:51.128 08:24:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:51.128 08:24:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:51.128 08:24:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:51.128 08:24:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:51.128 08:24:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:51.128 08:24:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:51.128 08:24:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:51.128 08:24:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:51.128 08:24:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:51.128 08:24:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:51.128 08:24:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:51.128 08:24:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:51.128 08:24:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:51.128 08:24:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.128 08:24:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.128 08:24:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.128 08:24:21 -- paths/export.sh@5 -- # export PATH 00:31:51.128 08:24:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:51.128 08:24:21 -- nvmf/common.sh@46 -- # : 0 00:31:51.128 08:24:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:51.128 08:24:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:51.128 08:24:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:51.128 08:24:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:51.128 08:24:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:51.128 08:24:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:51.128 08:24:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:51.128 08:24:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:51.128 08:24:21 -- target/dif.sh@15 -- # NULL_META=16 00:31:51.128 08:24:21 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:51.128 08:24:21 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:51.128 08:24:21 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:51.128 08:24:21 -- target/dif.sh@135 -- # nvmftestinit 00:31:51.128 08:24:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:51.128 08:24:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:51.128 08:24:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:51.128 08:24:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:51.128 08:24:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:51.128 08:24:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.128 08:24:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:51.128 08:24:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.128 08:24:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:51.128 08:24:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:51.128 08:24:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:51.128 08:24:21 -- common/autotest_common.sh@10 -- # set +x 00:31:57.717 08:24:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:57.717 08:24:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:57.717 08:24:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:57.717 08:24:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:57.717 08:24:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:57.717 08:24:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:57.717 08:24:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:57.717 08:24:28 -- nvmf/common.sh@294 -- # net_devs=() 00:31:57.717 08:24:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:57.717 08:24:28 -- nvmf/common.sh@295 -- # e810=() 00:31:57.717 08:24:28 -- nvmf/common.sh@295 -- # local -ga e810 00:31:57.717 08:24:28 -- nvmf/common.sh@296 -- # x722=() 00:31:57.717 08:24:28 -- nvmf/common.sh@296 -- # local -ga x722 00:31:57.717 08:24:28 -- nvmf/common.sh@297 -- # mlx=() 00:31:57.717 08:24:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:57.717 08:24:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:31:57.717 08:24:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.717 08:24:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:57.717 08:24:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:57.717 08:24:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:57.717 08:24:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:57.717 08:24:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:57.717 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:57.717 08:24:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:57.717 08:24:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:57.717 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:57.717 08:24:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:57.717 08:24:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:57.717 08:24:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:57.717 08:24:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.717 08:24:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:57.717 08:24:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.717 08:24:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:57.717 Found net devices under 0000:31:00.0: cvl_0_0 00:31:57.717 08:24:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.717 08:24:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:57.717 08:24:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.717 08:24:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:57.717 08:24:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.717 08:24:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:57.717 Found net devices under 0000:31:00.1: cvl_0_1 00:31:57.717 08:24:28 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:57.717 08:24:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:57.718 08:24:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:57.718 08:24:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:57.718 08:24:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:57.718 08:24:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:57.718 08:24:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.718 08:24:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.718 08:24:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.718 08:24:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:57.718 08:24:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.718 08:24:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.718 08:24:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:57.718 08:24:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.718 08:24:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.718 08:24:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:57.718 08:24:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:57.718 08:24:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.718 08:24:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.718 08:24:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.718 08:24:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.718 08:24:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:57.979 08:24:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.979 08:24:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.979 08:24:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.979 08:24:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:57.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:31:57.979 00:31:57.979 --- 10.0.0.2 ping statistics --- 00:31:57.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.979 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:31:57.979 08:24:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:31:57.979 00:31:57.979 --- 10.0.0.1 ping statistics --- 00:31:57.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.979 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:57.979 08:24:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.979 08:24:28 -- nvmf/common.sh@410 -- # return 0 00:31:57.979 08:24:28 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:57.979 08:24:28 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:01.282 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:01.282 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:01.282 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:01.543 08:24:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.543 08:24:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:01.543 08:24:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:01.543 08:24:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.543 08:24:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:01.543 08:24:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:01.543 08:24:32 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:01.543 08:24:32 -- target/dif.sh@137 -- # nvmfappstart 00:32:01.543 08:24:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:01.543 08:24:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:01.543 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:01.543 08:24:32 -- nvmf/common.sh@469 -- # nvmfpid=1275986 00:32:01.543 08:24:32 -- nvmf/common.sh@470 -- # waitforlisten 1275986 00:32:01.543 08:24:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:01.543 08:24:32 -- common/autotest_common.sh@819 -- # '[' -z 1275986 ']' 00:32:01.543 08:24:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.543 08:24:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:01.543 08:24:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
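[reader note] The TCP test topology that nvmftestinit builds in the trace above can be condensed into a few commands (all taken from the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this E810 test bed). One port is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), the sibling port stays in the root namespace as the initiator (10.0.0.1), and the nvmf_tgt application is then launched inside that namespace:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator-to-target reachability check
    # the target app itself runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF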
00:32:01.543 08:24:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:01.543 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:01.543 [2024-06-11 08:24:32.078643] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:01.543 [2024-06-11 08:24:32.078699] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.543 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.543 [2024-06-11 08:24:32.149922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.803 [2024-06-11 08:24:32.222529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:01.803 [2024-06-11 08:24:32.222648] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.803 [2024-06-11 08:24:32.222656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.803 [2024-06-11 08:24:32.222663] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.803 [2024-06-11 08:24:32.222682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.375 08:24:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:02.375 08:24:32 -- common/autotest_common.sh@852 -- # return 0 00:32:02.375 08:24:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:02.375 08:24:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:02.375 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:02.375 08:24:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:02.375 08:24:32 -- target/dif.sh@139 -- # create_transport 00:32:02.375 08:24:32 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:02.375 08:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.375 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:02.375 [2024-06-11 08:24:32.881623] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.375 08:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.375 08:24:32 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:02.375 08:24:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:02.375 08:24:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:02.375 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:02.375 ************************************ 00:32:02.375 START TEST fio_dif_1_default 00:32:02.375 ************************************ 00:32:02.375 08:24:32 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:32:02.375 08:24:32 -- target/dif.sh@86 -- # create_subsystems 0 00:32:02.375 08:24:32 -- target/dif.sh@28 -- # local sub 00:32:02.375 08:24:32 -- target/dif.sh@30 -- # for sub in "$@" 00:32:02.375 08:24:32 -- target/dif.sh@31 -- # create_subsystem 0 00:32:02.375 08:24:32 -- target/dif.sh@18 -- # local sub_id=0 00:32:02.375 08:24:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:02.375 08:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.375 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:02.375 bdev_null0 00:32:02.375 08:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.375 08:24:32 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:02.375 08:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.375 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:02.375 08:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.375 08:24:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:02.375 08:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.375 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:02.375 08:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.375 08:24:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:02.375 08:24:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.375 08:24:32 -- common/autotest_common.sh@10 -- # set +x 00:32:02.375 [2024-06-11 08:24:32.937894] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.375 08:24:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.375 08:24:32 -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:02.375 08:24:32 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:02.375 08:24:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:02.375 08:24:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.375 08:24:32 -- nvmf/common.sh@520 -- # config=() 00:32:02.375 08:24:32 -- nvmf/common.sh@520 -- # local subsystem config 00:32:02.375 08:24:32 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.375 08:24:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:02.375 08:24:32 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:02.375 08:24:32 -- target/dif.sh@82 -- # gen_fio_conf 00:32:02.375 08:24:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:02.375 { 00:32:02.375 "params": { 00:32:02.375 "name": "Nvme$subsystem", 00:32:02.375 "trtype": "$TEST_TRANSPORT", 00:32:02.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:02.375 "adrfam": "ipv4", 00:32:02.375 "trsvcid": "$NVMF_PORT", 00:32:02.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:02.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:02.375 "hdgst": ${hdgst:-false}, 00:32:02.375 "ddgst": ${ddgst:-false} 00:32:02.375 }, 00:32:02.375 "method": "bdev_nvme_attach_controller" 00:32:02.375 } 00:32:02.375 EOF 00:32:02.375 )") 00:32:02.375 08:24:32 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:02.375 08:24:32 -- target/dif.sh@54 -- # local file 00:32:02.375 08:24:32 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:02.375 08:24:32 -- target/dif.sh@56 -- # cat 00:32:02.375 08:24:32 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.375 08:24:32 -- common/autotest_common.sh@1320 -- # shift 00:32:02.375 08:24:32 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:02.375 08:24:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.375 08:24:32 -- nvmf/common.sh@542 -- # cat 00:32:02.375 08:24:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.375 08:24:32 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:32:02.375 08:24:32 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:02.375 08:24:32 -- target/dif.sh@72 -- # (( file <= files )) 00:32:02.375 08:24:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:02.375 08:24:32 -- nvmf/common.sh@544 -- # jq . 00:32:02.375 08:24:32 -- nvmf/common.sh@545 -- # IFS=, 00:32:02.375 08:24:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:02.375 "params": { 00:32:02.375 "name": "Nvme0", 00:32:02.375 "trtype": "tcp", 00:32:02.375 "traddr": "10.0.0.2", 00:32:02.375 "adrfam": "ipv4", 00:32:02.375 "trsvcid": "4420", 00:32:02.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.375 "hdgst": false, 00:32:02.375 "ddgst": false 00:32:02.375 }, 00:32:02.375 "method": "bdev_nvme_attach_controller" 00:32:02.375 }' 00:32:02.375 08:24:32 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:02.375 08:24:32 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:02.375 08:24:32 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:02.375 08:24:32 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:02.375 08:24:32 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:02.375 08:24:32 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:02.375 08:24:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:02.375 08:24:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:02.375 08:24:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:02.375 08:24:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:02.944 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:02.944 fio-3.35 00:32:02.944 Starting 1 thread 00:32:02.944 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.205 [2024-06-11 08:24:33.682346] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
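[reader note] On how the fio_dif_1_default workload above reaches the target: no kernel nvme-tcp initiator is involved. fio loads the SPDK bdev fio plugin (spdk_bdev) via LD_PRELOAD and is handed two pipes, one carrying the JSON printed above (which makes the plugin call bdev_nvme_attach_controller over NVMe/TCP to 10.0.0.2:4420, subsystem cnode0) and one carrying the generated job file. Condensed from the trace; the job-file body is a hypothetical minimal reconstruction, since gen_fio_conf's output is not echoed in the log:

    LD_PRELOAD=.../spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
    # /dev/fd/62: the '{ "params": { "name": "Nvme0", "trtype": "tcp", ... } }' config shown above
    # /dev/fd/61: the job file from gen_fio_conf; a minimal equivalent (hypothetical,
    # not copied from the log) would be roughly:
    #   [filename0]
    #   filename=Nvme0n1
    #   rw=randread
    #   bs=4096
    #   iodepth=4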
00:32:03.205 [2024-06-11 08:24:33.682392] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:13.201 00:32:13.201 filename0: (groupid=0, jobs=1): err= 0: pid=1276489: Tue Jun 11 08:24:43 2024 00:32:13.201 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10010msec) 00:32:13.201 slat (nsec): min=5370, max=63072, avg=6188.23, stdev=1892.30 00:32:13.201 clat (usec): min=551, max=42860, avg=21099.22, stdev=20128.37 00:32:13.201 lat (usec): min=556, max=42865, avg=21105.40, stdev=20128.35 00:32:13.201 clat percentiles (usec): 00:32:13.201 | 1.00th=[ 619], 5.00th=[ 758], 10.00th=[ 881], 20.00th=[ 906], 00:32:13.201 | 30.00th=[ 922], 40.00th=[ 938], 50.00th=[40633], 60.00th=[41157], 00:32:13.201 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:13.201 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:13.201 | 99.99th=[42730] 00:32:13.201 bw ( KiB/s): min= 672, max= 768, per=99.78%, avg=756.80, stdev=28.00, samples=20 00:32:13.201 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:32:13.201 lat (usec) : 750=4.91%, 1000=44.78% 00:32:13.201 lat (msec) : 2=0.11%, 50=50.21% 00:32:13.201 cpu : usr=95.59%, sys=4.18%, ctx=16, majf=0, minf=300 00:32:13.202 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.202 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.202 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:13.202 00:32:13.202 Run status group 0 (all jobs): 00:32:13.202 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10010-10010msec 00:32:13.462 08:24:43 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:13.462 08:24:43 -- target/dif.sh@43 -- # local sub 00:32:13.462 08:24:43 -- target/dif.sh@45 -- # for sub in "$@" 00:32:13.462 08:24:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:13.462 08:24:43 -- target/dif.sh@36 -- # local sub_id=0 00:32:13.462 08:24:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:13.462 08:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:43 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 08:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:13.462 08:24:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:43 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 08:24:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 00:32:13.462 real 0m11.073s 00:32:13.462 user 0m21.547s 00:32:13.462 sys 0m0.719s 00:32:13.462 08:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.462 08:24:43 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 ************************************ 00:32:13.462 END TEST fio_dif_1_default 00:32:13.462 ************************************ 00:32:13.462 08:24:44 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:13.462 08:24:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:13.462 08:24:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 ************************************ 00:32:13.462 START TEST 
fio_dif_1_multi_subsystems 00:32:13.462 ************************************ 00:32:13.462 08:24:44 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:32:13.462 08:24:44 -- target/dif.sh@92 -- # local files=1 00:32:13.462 08:24:44 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:13.462 08:24:44 -- target/dif.sh@28 -- # local sub 00:32:13.462 08:24:44 -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.462 08:24:44 -- target/dif.sh@31 -- # create_subsystem 0 00:32:13.462 08:24:44 -- target/dif.sh@18 -- # local sub_id=0 00:32:13.462 08:24:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 bdev_null0 00:32:13.462 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 [2024-06-11 08:24:44.058310] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.462 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:44 -- target/dif.sh@30 -- # for sub in "$@" 00:32:13.462 08:24:44 -- target/dif.sh@31 -- # create_subsystem 1 00:32:13.462 08:24:44 -- target/dif.sh@18 -- # local sub_id=1 00:32:13.462 08:24:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 bdev_null1 00:32:13.462 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- common/autotest_common.sh@10 -- # set +x 00:32:13.462 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.462 08:24:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:13.462 08:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.462 08:24:44 -- 
common/autotest_common.sh@10 -- # set +x 00:32:13.723 08:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.723 08:24:44 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:13.723 08:24:44 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:13.723 08:24:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:13.723 08:24:44 -- nvmf/common.sh@520 -- # config=() 00:32:13.723 08:24:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.723 08:24:44 -- nvmf/common.sh@520 -- # local subsystem config 00:32:13.723 08:24:44 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.723 08:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:13.723 08:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:13.723 { 00:32:13.723 "params": { 00:32:13.723 "name": "Nvme$subsystem", 00:32:13.723 "trtype": "$TEST_TRANSPORT", 00:32:13.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.723 "adrfam": "ipv4", 00:32:13.723 "trsvcid": "$NVMF_PORT", 00:32:13.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.723 "hdgst": ${hdgst:-false}, 00:32:13.723 "ddgst": ${ddgst:-false} 00:32:13.723 }, 00:32:13.723 "method": "bdev_nvme_attach_controller" 00:32:13.723 } 00:32:13.723 EOF 00:32:13.723 )") 00:32:13.723 08:24:44 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:13.723 08:24:44 -- target/dif.sh@82 -- # gen_fio_conf 00:32:13.723 08:24:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:13.723 08:24:44 -- target/dif.sh@54 -- # local file 00:32:13.723 08:24:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:13.723 08:24:44 -- target/dif.sh@56 -- # cat 00:32:13.723 08:24:44 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.723 08:24:44 -- common/autotest_common.sh@1320 -- # shift 00:32:13.723 08:24:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:13.723 08:24:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.723 08:24:44 -- nvmf/common.sh@542 -- # cat 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.723 08:24:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:13.723 08:24:44 -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:13.723 08:24:44 -- target/dif.sh@73 -- # cat 00:32:13.723 08:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:13.723 08:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:13.723 { 00:32:13.723 "params": { 00:32:13.723 "name": "Nvme$subsystem", 00:32:13.723 "trtype": "$TEST_TRANSPORT", 00:32:13.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:13.723 "adrfam": "ipv4", 00:32:13.723 "trsvcid": "$NVMF_PORT", 00:32:13.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:13.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:13.723 "hdgst": ${hdgst:-false}, 00:32:13.723 "ddgst": ${ddgst:-false} 00:32:13.723 }, 00:32:13.723 "method": "bdev_nvme_attach_controller" 00:32:13.723 } 00:32:13.723 EOF 00:32:13.723 )") 00:32:13.723 08:24:44 -- 
target/dif.sh@72 -- # (( file++ )) 00:32:13.723 08:24:44 -- target/dif.sh@72 -- # (( file <= files )) 00:32:13.723 08:24:44 -- nvmf/common.sh@542 -- # cat 00:32:13.723 08:24:44 -- nvmf/common.sh@544 -- # jq . 00:32:13.723 08:24:44 -- nvmf/common.sh@545 -- # IFS=, 00:32:13.723 08:24:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:13.723 "params": { 00:32:13.723 "name": "Nvme0", 00:32:13.723 "trtype": "tcp", 00:32:13.723 "traddr": "10.0.0.2", 00:32:13.723 "adrfam": "ipv4", 00:32:13.723 "trsvcid": "4420", 00:32:13.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:13.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:13.723 "hdgst": false, 00:32:13.723 "ddgst": false 00:32:13.723 }, 00:32:13.723 "method": "bdev_nvme_attach_controller" 00:32:13.723 },{ 00:32:13.723 "params": { 00:32:13.723 "name": "Nvme1", 00:32:13.723 "trtype": "tcp", 00:32:13.723 "traddr": "10.0.0.2", 00:32:13.723 "adrfam": "ipv4", 00:32:13.723 "trsvcid": "4420", 00:32:13.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.723 "hdgst": false, 00:32:13.723 "ddgst": false 00:32:13.723 }, 00:32:13.723 "method": "bdev_nvme_attach_controller" 00:32:13.723 }' 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:13.723 08:24:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:13.723 08:24:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:13.723 08:24:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:13.723 08:24:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:13.723 08:24:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:13.723 08:24:44 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:13.983 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:13.983 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:13.983 fio-3.35 00:32:13.983 Starting 2 threads 00:32:13.983 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.925 [2024-06-11 08:24:45.278280] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
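[reader note] The fio_dif_1_multi_subsystems case above is the same flow with two DIF-type-1 null bdevs exported through two subsystems (cnode0 and cnode1), both listening on the same 10.0.0.2:4420 portal, and fio attaching Nvme0 and Nvme1 as shown in the printed JSON. The rpc_cmd calls in the trace are the test wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock; for subsystem 0 they amount to the following (subsystem 1 is the same with bdev_null1, cnode1 and serial 53313233-1):

    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420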
00:32:14.925 [2024-06-11 08:24:45.278336] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:25.044 00:32:25.044 filename0: (groupid=0, jobs=1): err= 0: pid=1279033: Tue Jun 11 08:24:55 2024 00:32:25.044 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10022msec) 00:32:25.044 slat (nsec): min=5365, max=29509, avg=6354.10, stdev=1652.93 00:32:25.044 clat (usec): min=40792, max=43000, avg=41055.44, stdev=294.06 00:32:25.044 lat (usec): min=40798, max=43006, avg=41061.80, stdev=294.10 00:32:25.044 clat percentiles (usec): 00:32:25.044 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:25.044 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:25.044 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:25.044 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:32:25.044 | 99.99th=[43254] 00:32:25.044 bw ( KiB/s): min= 384, max= 416, per=33.85%, avg=388.80, stdev=11.72, samples=20 00:32:25.044 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:32:25.044 lat (msec) : 50=100.00% 00:32:25.044 cpu : usr=97.44%, sys=2.35%, ctx=16, majf=0, minf=148 00:32:25.044 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.044 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.044 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:25.044 filename1: (groupid=0, jobs=1): err= 0: pid=1279034: Tue Jun 11 08:24:55 2024 00:32:25.044 read: IOPS=189, BW=757KiB/s (776kB/s)(7584KiB/10014msec) 00:32:25.044 slat (nsec): min=5366, max=29516, avg=6188.46, stdev=1388.05 00:32:25.044 clat (usec): min=501, max=42992, avg=21109.41, stdev=20155.93 00:32:25.044 lat (usec): min=509, max=42998, avg=21115.60, stdev=20155.85 00:32:25.044 clat percentiles (usec): 00:32:25.044 | 1.00th=[ 635], 5.00th=[ 701], 10.00th=[ 840], 20.00th=[ 898], 00:32:25.044 | 30.00th=[ 922], 40.00th=[ 930], 50.00th=[40633], 60.00th=[41157], 00:32:25.044 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:25.044 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[43254], 00:32:25.044 | 99.99th=[43254] 00:32:25.044 bw ( KiB/s): min= 672, max= 768, per=65.95%, avg=756.80, stdev=28.00, samples=20 00:32:25.044 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:32:25.044 lat (usec) : 750=7.59%, 1000=41.67% 00:32:25.044 lat (msec) : 2=0.53%, 50=50.21% 00:32:25.044 cpu : usr=97.24%, sys=2.54%, ctx=14, majf=0, minf=162 00:32:25.044 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.044 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.044 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:25.044 00:32:25.044 Run status group 0 (all jobs): 00:32:25.044 READ: bw=1146KiB/s (1174kB/s), 390KiB/s-757KiB/s (399kB/s-776kB/s), io=11.2MiB (11.8MB), run=10014-10022msec 00:32:25.044 08:24:55 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:25.044 08:24:55 -- target/dif.sh@43 -- # local sub 00:32:25.044 08:24:55 -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.044 08:24:55 -- target/dif.sh@46 -- # destroy_subsystem 0 
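[reader note] A quick sanity check on the "Run status group 0" summary above: the aggregate READ bandwidth is simply the sum of the two per-file results, 390 KiB/s + 757 KiB/s, which is about 1146 KiB/s within rounding, and the total io of 11.2 MiB matches 3904 KiB + 7584 KiB = 11488 KiB (11.2 MiB) spread over the roughly 10 s runtime. This is only a reading aid for the fio output; it adds no information beyond the log itself.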
00:32:25.044 08:24:55 -- target/dif.sh@36 -- # local sub_id=0 00:32:25.044 08:24:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.044 08:24:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.044 08:24:55 -- target/dif.sh@45 -- # for sub in "$@" 00:32:25.044 08:24:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:25.044 08:24:55 -- target/dif.sh@36 -- # local sub_id=1 00:32:25.044 08:24:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.044 08:24:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.044 00:32:25.044 real 0m11.589s 00:32:25.044 user 0m31.572s 00:32:25.044 sys 0m0.827s 00:32:25.044 08:24:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 ************************************ 00:32:25.044 END TEST fio_dif_1_multi_subsystems 00:32:25.044 ************************************ 00:32:25.044 08:24:55 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:25.044 08:24:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:25.044 08:24:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 ************************************ 00:32:25.044 START TEST fio_dif_rand_params 00:32:25.044 ************************************ 00:32:25.044 08:24:55 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:32:25.044 08:24:55 -- target/dif.sh@100 -- # local NULL_DIF 00:32:25.044 08:24:55 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:25.044 08:24:55 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:25.044 08:24:55 -- target/dif.sh@103 -- # bs=128k 00:32:25.044 08:24:55 -- target/dif.sh@103 -- # numjobs=3 00:32:25.044 08:24:55 -- target/dif.sh@103 -- # iodepth=3 00:32:25.044 08:24:55 -- target/dif.sh@103 -- # runtime=5 00:32:25.044 08:24:55 -- target/dif.sh@105 -- # create_subsystems 0 00:32:25.044 08:24:55 -- target/dif.sh@28 -- # local sub 00:32:25.044 08:24:55 -- target/dif.sh@30 -- # for sub in "$@" 00:32:25.044 08:24:55 -- target/dif.sh@31 -- # create_subsystem 0 00:32:25.044 08:24:55 -- target/dif.sh@18 -- # local sub_id=0 00:32:25.044 08:24:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 bdev_null0 00:32:25.044 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.044 08:24:55 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.044 08:24:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.044 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.044 08:24:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:25.044 08:24:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:25.044 08:24:55 -- common/autotest_common.sh@10 -- # set +x 00:32:25.304 [2024-06-11 08:24:55.694271] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.304 08:24:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:25.304 08:24:55 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:25.304 08:24:55 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:25.304 08:24:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:25.304 08:24:55 -- nvmf/common.sh@520 -- # config=() 00:32:25.304 08:24:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.304 08:24:55 -- nvmf/common.sh@520 -- # local subsystem config 00:32:25.304 08:24:55 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.304 08:24:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:25.304 08:24:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:25.304 { 00:32:25.304 "params": { 00:32:25.304 "name": "Nvme$subsystem", 00:32:25.304 "trtype": "$TEST_TRANSPORT", 00:32:25.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.304 "adrfam": "ipv4", 00:32:25.304 "trsvcid": "$NVMF_PORT", 00:32:25.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.304 "hdgst": ${hdgst:-false}, 00:32:25.304 "ddgst": ${ddgst:-false} 00:32:25.304 }, 00:32:25.304 "method": "bdev_nvme_attach_controller" 00:32:25.304 } 00:32:25.304 EOF 00:32:25.304 )") 00:32:25.304 08:24:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:25.304 08:24:55 -- target/dif.sh@82 -- # gen_fio_conf 00:32:25.304 08:24:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:25.304 08:24:55 -- target/dif.sh@54 -- # local file 00:32:25.304 08:24:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:25.304 08:24:55 -- target/dif.sh@56 -- # cat 00:32:25.304 08:24:55 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.304 08:24:55 -- common/autotest_common.sh@1320 -- # shift 00:32:25.304 08:24:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:25.304 08:24:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.304 08:24:55 -- nvmf/common.sh@542 -- # cat 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.304 08:24:55 -- 
target/dif.sh@72 -- # (( file = 1 )) 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:25.304 08:24:55 -- target/dif.sh@72 -- # (( file <= files )) 00:32:25.304 08:24:55 -- nvmf/common.sh@544 -- # jq . 00:32:25.304 08:24:55 -- nvmf/common.sh@545 -- # IFS=, 00:32:25.304 08:24:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:25.304 "params": { 00:32:25.304 "name": "Nvme0", 00:32:25.304 "trtype": "tcp", 00:32:25.304 "traddr": "10.0.0.2", 00:32:25.304 "adrfam": "ipv4", 00:32:25.304 "trsvcid": "4420", 00:32:25.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:25.304 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:25.304 "hdgst": false, 00:32:25.304 "ddgst": false 00:32:25.304 }, 00:32:25.304 "method": "bdev_nvme_attach_controller" 00:32:25.304 }' 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:25.304 08:24:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:25.304 08:24:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:25.304 08:24:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:25.304 08:24:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:25.304 08:24:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:25.304 08:24:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:25.564 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:25.564 ... 00:32:25.564 fio-3.35 00:32:25.564 Starting 3 threads 00:32:25.564 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.136 [2024-06-11 08:24:56.550362] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
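[reader note] For this fio_dif_rand_params case the parameters set at the top of the test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) mean the backing null bdev is created with DIF type 3 and fio runs three 128 KiB random-read jobs at queue depth 3 for 5 seconds against the single cnode0 subsystem. Condensed from the trace (the 64 and 512 arguments are, as I read the bdev_null_create usage, the bdev size in MiB and the block size):

    rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # 64 MiB null bdev, 512-byte blocks, 16 bytes of per-block metadata carrying the type-3 DIF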
00:32:26.136 [2024-06-11 08:24:56.550407] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:31.423 00:32:31.423 filename0: (groupid=0, jobs=1): err= 0: pid=1281274: Tue Jun 11 08:25:01 2024 00:32:31.423 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(130MiB/5047msec) 00:32:31.423 slat (usec): min=5, max=117, avg= 8.54, stdev= 3.89 00:32:31.423 clat (usec): min=4277, max=95383, avg=14560.28, stdev=13737.97 00:32:31.423 lat (usec): min=4285, max=95391, avg=14568.82, stdev=13738.06 00:32:31.423 clat percentiles (usec): 00:32:31.423 | 1.00th=[ 5669], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[ 8979], 00:32:31.423 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[11207], 00:32:31.423 | 70.00th=[11731], 80.00th=[12780], 90.00th=[15401], 95.00th=[51643], 00:32:31.423 | 99.00th=[56361], 99.50th=[90702], 99.90th=[93848], 99.95th=[94897], 00:32:31.423 | 99.99th=[94897] 00:32:31.423 bw ( KiB/s): min=19200, max=34560, per=28.30%, avg=26444.80, stdev=5318.19, samples=10 00:32:31.423 iops : min= 150, max= 270, avg=206.60, stdev=41.55, samples=10 00:32:31.423 lat (msec) : 10=36.97%, 20=53.96%, 50=0.97%, 100=8.11% 00:32:31.424 cpu : usr=96.39%, sys=3.33%, ctx=7, majf=0, minf=143 00:32:31.424 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.424 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:31.424 filename0: (groupid=0, jobs=1): err= 0: pid=1281275: Tue Jun 11 08:25:01 2024 00:32:31.424 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(179MiB/5044msec) 00:32:31.424 slat (nsec): min=5431, max=32090, avg=8225.03, stdev=1956.57 00:32:31.424 clat (usec): min=5223, max=53424, avg=10556.64, stdev=6219.19 00:32:31.424 lat (usec): min=5231, max=53433, avg=10564.87, stdev=6219.25 00:32:31.424 clat percentiles (usec): 00:32:31.424 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7767], 00:32:31.424 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10290], 00:32:31.424 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12649], 95.00th=[13435], 00:32:31.424 | 99.00th=[49546], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:32:31.424 | 99.99th=[53216] 00:32:31.424 bw ( KiB/s): min=28928, max=43008, per=39.07%, avg=36505.60, stdev=4152.85, samples=10 00:32:31.424 iops : min= 226, max= 336, avg=285.20, stdev=32.44, samples=10 00:32:31.424 lat (msec) : 10=55.39%, 20=42.37%, 50=1.47%, 100=0.77% 00:32:31.424 cpu : usr=96.77%, sys=2.97%, ctx=12, majf=0, minf=110 00:32:31.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.424 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:31.424 filename0: (groupid=0, jobs=1): err= 0: pid=1281276: Tue Jun 11 08:25:01 2024 00:32:31.424 read: IOPS=243, BW=30.5MiB/s (32.0MB/s)(153MiB/5003msec) 00:32:31.424 slat (nsec): min=5589, max=60838, avg=9063.58, stdev=2463.64 00:32:31.424 clat (usec): min=5010, max=92233, avg=12289.84, stdev=8320.10 00:32:31.424 lat (usec): min=5018, max=92245, avg=12298.90, stdev=8320.22 00:32:31.424 clat percentiles (usec): 
00:32:31.424 | 1.00th=[ 5866], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8848], 00:32:31.424 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10683], 60.00th=[11469], 00:32:31.424 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14353], 95.00th=[15664], 00:32:31.424 | 99.00th=[53740], 99.50th=[55313], 99.90th=[88605], 99.95th=[91751], 00:32:31.424 | 99.99th=[91751] 00:32:31.424 bw ( KiB/s): min=23552, max=38144, per=33.37%, avg=31180.80, stdev=4542.43, samples=10 00:32:31.424 iops : min= 184, max= 298, avg=243.60, stdev=35.49, samples=10 00:32:31.424 lat (msec) : 10=37.95%, 20=58.52%, 50=1.07%, 100=2.46% 00:32:31.424 cpu : usr=96.64%, sys=3.08%, ctx=9, majf=0, minf=107 00:32:31.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.424 issued rwts: total=1220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:31.424 00:32:31.424 Run status group 0 (all jobs): 00:32:31.424 READ: bw=91.2MiB/s (95.7MB/s), 25.7MiB/s-35.4MiB/s (26.9MB/s-37.1MB/s), io=461MiB (483MB), run=5003-5047msec 00:32:31.424 08:25:01 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:31.424 08:25:01 -- target/dif.sh@43 -- # local sub 00:32:31.424 08:25:01 -- target/dif.sh@45 -- # for sub in "$@" 00:32:31.424 08:25:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:31.424 08:25:01 -- target/dif.sh@36 -- # local sub_id=0 00:32:31.424 08:25:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:31.424 08:25:01 -- target/dif.sh@109 -- # bs=4k 00:32:31.424 08:25:01 -- target/dif.sh@109 -- # numjobs=8 00:32:31.424 08:25:01 -- target/dif.sh@109 -- # iodepth=16 00:32:31.424 08:25:01 -- target/dif.sh@109 -- # runtime= 00:32:31.424 08:25:01 -- target/dif.sh@109 -- # files=2 00:32:31.424 08:25:01 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:31.424 08:25:01 -- target/dif.sh@28 -- # local sub 00:32:31.424 08:25:01 -- target/dif.sh@30 -- # for sub in "$@" 00:32:31.424 08:25:01 -- target/dif.sh@31 -- # create_subsystem 0 00:32:31.424 08:25:01 -- target/dif.sh@18 -- # local sub_id=0 00:32:31.424 08:25:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 bdev_null0 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 [2024-06-11 08:25:01.908693] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@30 -- # for sub in "$@" 00:32:31.424 08:25:01 -- target/dif.sh@31 -- # create_subsystem 1 00:32:31.424 08:25:01 -- target/dif.sh@18 -- # local sub_id=1 00:32:31.424 08:25:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 bdev_null1 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@30 -- # for sub in "$@" 00:32:31.424 08:25:01 -- target/dif.sh@31 -- # create_subsystem 2 00:32:31.424 08:25:01 -- target/dif.sh@18 -- # local sub_id=2 00:32:31.424 08:25:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 bdev_null2 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- 
common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:31.424 08:25:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:31.424 08:25:01 -- common/autotest_common.sh@10 -- # set +x 00:32:31.424 08:25:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:31.424 08:25:02 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:31.424 08:25:02 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:31.424 08:25:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:31.424 08:25:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.424 08:25:02 -- nvmf/common.sh@520 -- # config=() 00:32:31.424 08:25:02 -- nvmf/common.sh@520 -- # local subsystem config 00:32:31.424 08:25:02 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.424 08:25:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:31.424 08:25:02 -- target/dif.sh@82 -- # gen_fio_conf 00:32:31.424 08:25:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:31.424 08:25:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:31.424 { 00:32:31.424 "params": { 00:32:31.424 "name": "Nvme$subsystem", 00:32:31.424 "trtype": "$TEST_TRANSPORT", 00:32:31.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.424 "adrfam": "ipv4", 00:32:31.424 "trsvcid": "$NVMF_PORT", 00:32:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.424 "hdgst": ${hdgst:-false}, 00:32:31.425 "ddgst": ${ddgst:-false} 00:32:31.425 }, 00:32:31.425 "method": "bdev_nvme_attach_controller" 00:32:31.425 } 00:32:31.425 EOF 00:32:31.425 )") 00:32:31.425 08:25:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.425 08:25:02 -- target/dif.sh@54 -- # local file 00:32:31.425 08:25:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:31.425 08:25:02 -- target/dif.sh@56 -- # cat 00:32:31.425 08:25:02 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.425 08:25:02 -- common/autotest_common.sh@1320 -- # shift 00:32:31.425 08:25:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:31.425 08:25:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.425 08:25:02 -- nvmf/common.sh@542 -- # cat 00:32:31.425 08:25:02 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.425 08:25:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:31.425 08:25:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:31.425 08:25:02 -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.425 08:25:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:31.425 08:25:02 -- target/dif.sh@73 -- # cat 00:32:31.425 08:25:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:31.425 08:25:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:31.425 { 00:32:31.425 "params": { 00:32:31.425 "name": "Nvme$subsystem", 00:32:31.425 "trtype": "$TEST_TRANSPORT", 00:32:31.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.425 "adrfam": "ipv4", 00:32:31.425 "trsvcid": 
"$NVMF_PORT", 00:32:31.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.425 "hdgst": ${hdgst:-false}, 00:32:31.425 "ddgst": ${ddgst:-false} 00:32:31.425 }, 00:32:31.425 "method": "bdev_nvme_attach_controller" 00:32:31.425 } 00:32:31.425 EOF 00:32:31.425 )") 00:32:31.425 08:25:02 -- target/dif.sh@72 -- # (( file++ )) 00:32:31.425 08:25:02 -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.425 08:25:02 -- target/dif.sh@73 -- # cat 00:32:31.425 08:25:02 -- nvmf/common.sh@542 -- # cat 00:32:31.425 08:25:02 -- target/dif.sh@72 -- # (( file++ )) 00:32:31.425 08:25:02 -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.425 08:25:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:31.425 08:25:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:31.425 { 00:32:31.425 "params": { 00:32:31.425 "name": "Nvme$subsystem", 00:32:31.425 "trtype": "$TEST_TRANSPORT", 00:32:31.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.425 "adrfam": "ipv4", 00:32:31.425 "trsvcid": "$NVMF_PORT", 00:32:31.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.425 "hdgst": ${hdgst:-false}, 00:32:31.425 "ddgst": ${ddgst:-false} 00:32:31.425 }, 00:32:31.425 "method": "bdev_nvme_attach_controller" 00:32:31.425 } 00:32:31.425 EOF 00:32:31.425 )") 00:32:31.425 08:25:02 -- nvmf/common.sh@542 -- # cat 00:32:31.425 08:25:02 -- nvmf/common.sh@544 -- # jq . 00:32:31.425 08:25:02 -- nvmf/common.sh@545 -- # IFS=, 00:32:31.425 08:25:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:31.425 "params": { 00:32:31.425 "name": "Nvme0", 00:32:31.425 "trtype": "tcp", 00:32:31.425 "traddr": "10.0.0.2", 00:32:31.425 "adrfam": "ipv4", 00:32:31.425 "trsvcid": "4420", 00:32:31.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.425 "hdgst": false, 00:32:31.425 "ddgst": false 00:32:31.425 }, 00:32:31.425 "method": "bdev_nvme_attach_controller" 00:32:31.425 },{ 00:32:31.425 "params": { 00:32:31.425 "name": "Nvme1", 00:32:31.425 "trtype": "tcp", 00:32:31.425 "traddr": "10.0.0.2", 00:32:31.425 "adrfam": "ipv4", 00:32:31.425 "trsvcid": "4420", 00:32:31.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:31.425 "hdgst": false, 00:32:31.425 "ddgst": false 00:32:31.425 }, 00:32:31.425 "method": "bdev_nvme_attach_controller" 00:32:31.425 },{ 00:32:31.425 "params": { 00:32:31.425 "name": "Nvme2", 00:32:31.425 "trtype": "tcp", 00:32:31.425 "traddr": "10.0.0.2", 00:32:31.425 "adrfam": "ipv4", 00:32:31.425 "trsvcid": "4420", 00:32:31.425 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:31.425 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:31.425 "hdgst": false, 00:32:31.425 "ddgst": false 00:32:31.425 }, 00:32:31.425 "method": "bdev_nvme_attach_controller" 00:32:31.425 }' 00:32:31.425 08:25:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:31.425 08:25:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:31.425 08:25:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.425 08:25:02 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.425 08:25:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:31.425 08:25:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:31.708 08:25:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:31.708 08:25:02 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:31.708 08:25:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:31.708 08:25:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.971 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:31.971 ... 00:32:31.971 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:31.971 ... 00:32:31.971 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:31.971 ... 00:32:31.971 fio-3.35 00:32:31.971 Starting 24 threads 00:32:31.971 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.908 [2024-06-11 08:25:03.189619] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:32.908 [2024-06-11 08:25:03.189665] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:42.915 00:32:42.915 filename0: (groupid=0, jobs=1): err= 0: pid=1282777: Tue Jun 11 08:25:13 2024 00:32:42.915 read: IOPS=538, BW=2153KiB/s (2204kB/s)(21.1MiB/10019msec) 00:32:42.915 slat (usec): min=5, max=136, avg=13.05, stdev=14.89 00:32:42.915 clat (usec): min=1054, max=32996, avg=29630.80, stdev=5467.64 00:32:42.915 lat (usec): min=1065, max=33005, avg=29643.85, stdev=5466.89 00:32:42.915 clat percentiles (usec): 00:32:42.915 | 1.00th=[ 1582], 5.00th=[25297], 10.00th=[30016], 20.00th=[30278], 00:32:42.915 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:32:42.915 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31327], 95.00th=[31589], 00:32:42.915 | 99.00th=[32113], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:32:42.915 | 99.99th=[32900] 00:32:42.915 bw ( KiB/s): min= 2048, max= 3456, per=4.28%, avg=2150.40, stdev=312.43, samples=20 00:32:42.915 iops : min= 512, max= 864, avg=537.60, stdev=78.11, samples=20 00:32:42.915 lat (msec) : 2=2.41%, 4=0.85%, 10=0.30%, 20=0.43%, 50=96.01% 00:32:42.915 cpu : usr=99.14%, sys=0.51%, ctx=67, majf=0, minf=24 00:32:42.915 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:42.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.915 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.915 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.915 filename0: (groupid=0, jobs=1): err= 0: pid=1282778: Tue Jun 11 08:25:13 2024 00:32:42.915 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.2MiB/10018msec) 00:32:42.915 slat (usec): min=5, max=107, avg=20.90, stdev=19.47 00:32:42.915 clat (usec): min=26714, max=47256, avg=30758.84, stdev=992.01 00:32:42.915 lat (usec): min=26721, max=47277, avg=30779.74, stdev=988.98 00:32:42.915 clat percentiles (usec): 00:32:42.915 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.915 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:32:42.915 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.915 | 99.00th=[32113], 99.50th=[32375], 99.90th=[45351], 99.95th=[45351], 00:32:42.915 | 99.99th=[47449] 00:32:42.915 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2066.70, stdev=62.28, samples=20 
00:32:42.915 iops : min= 480, max= 544, avg=516.60, stdev=15.52, samples=20 00:32:42.915 lat (msec) : 50=100.00% 00:32:42.915 cpu : usr=99.15%, sys=0.51%, ctx=64, majf=0, minf=24 00:32:42.915 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:42.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.915 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.915 filename0: (groupid=0, jobs=1): err= 0: pid=1282779: Tue Jun 11 08:25:13 2024 00:32:42.915 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.2MiB/10003msec) 00:32:42.915 slat (nsec): min=5421, max=68272, avg=20378.91, stdev=12607.04 00:32:42.915 clat (usec): min=10307, max=49901, avg=30685.63, stdev=1652.40 00:32:42.915 lat (usec): min=10328, max=49916, avg=30706.01, stdev=1652.19 00:32:42.915 clat percentiles (usec): 00:32:42.915 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:32:42.915 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.915 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.915 | 99.00th=[32113], 99.50th=[32375], 99.90th=[50070], 99.95th=[50070], 00:32:42.915 | 99.99th=[50070] 00:32:42.915 bw ( KiB/s): min= 1923, max= 2176, per=4.12%, avg=2068.11, stdev=63.34, samples=19 00:32:42.915 iops : min= 480, max= 544, avg=516.95, stdev=15.87, samples=19 00:32:42.915 lat (msec) : 20=0.31%, 50=99.69% 00:32:42.915 cpu : usr=98.79%, sys=0.72%, ctx=127, majf=0, minf=18 00:32:42.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:42.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.915 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.915 filename0: (groupid=0, jobs=1): err= 0: pid=1282780: Tue Jun 11 08:25:13 2024 00:32:42.915 read: IOPS=517, BW=2072KiB/s (2121kB/s)(20.2MiB/10002msec) 00:32:42.915 slat (usec): min=5, max=119, avg=28.51, stdev=20.01 00:32:42.915 clat (usec): min=18992, max=44130, avg=30651.86, stdev=1691.22 00:32:42.915 lat (usec): min=18998, max=44141, avg=30680.38, stdev=1691.65 00:32:42.915 clat percentiles (usec): 00:32:42.915 | 1.00th=[25297], 5.00th=[29492], 10.00th=[30016], 20.00th=[30278], 00:32:42.915 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.915 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31851], 00:32:42.915 | 99.00th=[39584], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:32:42.915 | 99.99th=[44303] 00:32:42.915 bw ( KiB/s): min= 2016, max= 2176, per=4.12%, avg=2066.53, stdev=50.11, samples=19 00:32:42.915 iops : min= 504, max= 544, avg=516.63, stdev=12.53, samples=19 00:32:42.915 lat (msec) : 20=0.12%, 50=99.88% 00:32:42.915 cpu : usr=98.42%, sys=0.85%, ctx=204, majf=0, minf=18 00:32:42.916 IO depths : 1=3.9%, 2=8.1%, 4=21.8%, 8=57.3%, 16=8.9%, 32=0.0%, >=64=0.0% 00:32:42.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 issued rwts: total=5180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.916 filename0: 
(groupid=0, jobs=1): err= 0: pid=1282781: Tue Jun 11 08:25:13 2024 00:32:42.916 read: IOPS=526, BW=2105KiB/s (2156kB/s)(20.6MiB/10021msec) 00:32:42.916 slat (usec): min=5, max=104, avg=22.31, stdev=19.05 00:32:42.916 clat (usec): min=9949, max=56319, avg=30206.83, stdev=4281.39 00:32:42.916 lat (usec): min=9965, max=56345, avg=30229.15, stdev=4283.45 00:32:42.916 clat percentiles (usec): 00:32:42.916 | 1.00th=[16581], 5.00th=[22152], 10.00th=[24511], 20.00th=[29754], 00:32:42.916 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.916 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31851], 95.00th=[36439], 00:32:42.916 | 99.00th=[46400], 99.50th=[47449], 99.90th=[51119], 99.95th=[56361], 00:32:42.916 | 99.99th=[56361] 00:32:42.916 bw ( KiB/s): min= 1840, max= 2304, per=4.19%, avg=2102.95, stdev=111.20, samples=20 00:32:42.916 iops : min= 460, max= 576, avg=525.70, stdev=27.74, samples=20 00:32:42.916 lat (msec) : 10=0.06%, 20=2.60%, 50=97.00%, 100=0.34% 00:32:42.916 cpu : usr=99.06%, sys=0.64%, ctx=68, majf=0, minf=28 00:32:42.916 IO depths : 1=3.2%, 2=6.4%, 4=14.7%, 8=65.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:32:42.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 complete : 0=0.0%, 4=91.5%, 8=4.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 issued rwts: total=5274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.916 filename0: (groupid=0, jobs=1): err= 0: pid=1282782: Tue Jun 11 08:25:13 2024 00:32:42.916 read: IOPS=518, BW=2072KiB/s (2122kB/s)(20.2MiB/10007msec) 00:32:42.916 slat (nsec): min=5523, max=39383, avg=8160.38, stdev=3380.23 00:32:42.916 clat (usec): min=24494, max=44646, avg=30811.63, stdev=1057.78 00:32:42.916 lat (usec): min=24503, max=44664, avg=30819.79, stdev=1057.74 00:32:42.916 clat percentiles (usec): 00:32:42.916 | 1.00th=[28967], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.916 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[31065], 00:32:42.916 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:32:42.916 | 99.00th=[32637], 99.50th=[32637], 99.90th=[44827], 99.95th=[44827], 00:32:42.916 | 99.99th=[44827] 00:32:42.916 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2067.95, stdev=63.73, samples=19 00:32:42.916 iops : min= 480, max= 544, avg=516.95, stdev=15.87, samples=19 00:32:42.916 lat (msec) : 50=100.00% 00:32:42.916 cpu : usr=99.35%, sys=0.39%, ctx=9, majf=0, minf=36 00:32:42.916 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:42.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.916 filename0: (groupid=0, jobs=1): err= 0: pid=1282783: Tue Jun 11 08:25:13 2024 00:32:42.916 read: IOPS=520, BW=2082KiB/s (2132kB/s)(20.4MiB/10023msec) 00:32:42.916 slat (usec): min=5, max=120, avg=12.58, stdev=12.57 00:32:42.916 clat (usec): min=13810, max=32659, avg=30643.30, stdev=1342.66 00:32:42.916 lat (usec): min=13816, max=32666, avg=30655.88, stdev=1341.37 00:32:42.916 clat percentiles (usec): 00:32:42.916 | 1.00th=[24249], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:32:42.916 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:32:42.916 | 70.00th=[31065], 
80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.916 | 99.00th=[31851], 99.50th=[32375], 99.90th=[32637], 99.95th=[32637], 00:32:42.916 | 99.99th=[32637] 00:32:42.916 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2080.00, stdev=56.87, samples=20 00:32:42.916 iops : min= 512, max= 544, avg=520.00, stdev=14.22, samples=20 00:32:42.916 lat (msec) : 20=0.31%, 50=99.69% 00:32:42.916 cpu : usr=99.17%, sys=0.46%, ctx=113, majf=0, minf=27 00:32:42.916 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:42.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.916 filename0: (groupid=0, jobs=1): err= 0: pid=1282784: Tue Jun 11 08:25:13 2024 00:32:42.916 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.2MiB/10004msec) 00:32:42.916 slat (usec): min=5, max=111, avg=29.91, stdev=18.87 00:32:42.916 clat (usec): min=9540, max=49943, avg=30586.90, stdev=1612.12 00:32:42.916 lat (usec): min=9546, max=49958, avg=30616.81, stdev=1612.05 00:32:42.916 clat percentiles (usec): 00:32:42.916 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.916 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.916 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:32:42.916 | 99.00th=[32113], 99.50th=[32113], 99.90th=[50070], 99.95th=[50070], 00:32:42.916 | 99.99th=[50070] 00:32:42.916 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2067.95, stdev=63.73, samples=19 00:32:42.916 iops : min= 480, max= 544, avg=516.95, stdev=15.87, samples=19 00:32:42.916 lat (msec) : 10=0.14%, 20=0.17%, 50=99.69% 00:32:42.916 cpu : usr=99.17%, sys=0.49%, ctx=62, majf=0, minf=25 00:32:42.916 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:42.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.916 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.916 filename1: (groupid=0, jobs=1): err= 0: pid=1282785: Tue Jun 11 08:25:13 2024 00:32:42.916 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10024msec) 00:32:42.916 slat (usec): min=5, max=125, avg=31.39, stdev=21.01 00:32:42.916 clat (usec): min=21549, max=34835, avg=30560.20, stdev=807.76 00:32:42.916 lat (usec): min=21560, max=34841, avg=30591.59, stdev=806.51 00:32:42.916 clat percentiles (usec): 00:32:42.916 | 1.00th=[28443], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:32:42.916 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.916 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.916 | 99.00th=[32113], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:32:42.916 | 99.99th=[34866] 00:32:42.916 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2073.60, stdev=50.70, samples=20 00:32:42.916 iops : min= 512, max= 544, avg=518.40, stdev=12.68, samples=20 00:32:42.917 lat (msec) : 50=100.00% 00:32:42.917 cpu : usr=99.22%, sys=0.50%, ctx=7, majf=0, minf=31 00:32:42.917 IO depths : 1=5.1%, 2=11.3%, 4=24.9%, 8=51.3%, 16=7.4%, 32=0.0%, >=64=0.0% 00:32:42.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:42.917 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.917 filename1: (groupid=0, jobs=1): err= 0: pid=1282786: Tue Jun 11 08:25:13 2024 00:32:42.917 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.2MiB/10018msec) 00:32:42.917 slat (usec): min=5, max=126, avg=32.22, stdev=20.32 00:32:42.917 clat (usec): min=27299, max=46065, avg=30619.80, stdev=1011.58 00:32:42.917 lat (usec): min=27305, max=46082, avg=30652.02, stdev=1010.08 00:32:42.917 clat percentiles (usec): 00:32:42.917 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.917 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.917 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:32:42.917 | 99.00th=[32113], 99.50th=[32113], 99.90th=[45876], 99.95th=[45876], 00:32:42.917 | 99.99th=[45876] 00:32:42.917 bw ( KiB/s): min= 1916, max= 2176, per=4.12%, avg=2066.50, stdev=62.78, samples=20 00:32:42.917 iops : min= 479, max= 544, avg=516.55, stdev=15.65, samples=20 00:32:42.917 lat (msec) : 50=100.00% 00:32:42.917 cpu : usr=99.37%, sys=0.37%, ctx=10, majf=0, minf=22 00:32:42.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:42.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.917 filename1: (groupid=0, jobs=1): err= 0: pid=1282787: Tue Jun 11 08:25:13 2024 00:32:42.917 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.2MiB/10013msec) 00:32:42.917 slat (usec): min=5, max=110, avg=26.28, stdev=17.68 00:32:42.917 clat (usec): min=14658, max=53188, avg=30672.81, stdev=2275.31 00:32:42.917 lat (usec): min=14664, max=53205, avg=30699.08, stdev=2275.10 00:32:42.917 clat percentiles (usec): 00:32:42.917 | 1.00th=[19792], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.917 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.917 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31851], 00:32:42.917 | 99.00th=[40633], 99.50th=[43779], 99.90th=[52167], 99.95th=[52691], 00:32:42.917 | 99.99th=[53216] 00:32:42.917 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2066.70, stdev=61.30, samples=20 00:32:42.917 iops : min= 480, max= 544, avg=516.60, stdev=15.36, samples=20 00:32:42.917 lat (msec) : 20=1.00%, 50=98.73%, 100=0.27% 00:32:42.917 cpu : usr=98.99%, sys=0.70%, ctx=62, majf=0, minf=21 00:32:42.917 IO depths : 1=5.2%, 2=11.2%, 4=24.4%, 8=51.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:32:42.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.917 filename1: (groupid=0, jobs=1): err= 0: pid=1282788: Tue Jun 11 08:25:13 2024 00:32:42.917 read: IOPS=519, BW=2078KiB/s (2128kB/s)(20.3MiB/10009msec) 00:32:42.917 slat (nsec): min=5513, max=74698, avg=15250.80, stdev=11216.14 00:32:42.917 clat (usec): min=10900, max=49585, avg=30676.71, stdev=1998.83 00:32:42.917 lat (usec): min=10905, max=49623, avg=30691.96, stdev=1999.44 
00:32:42.917 clat percentiles (usec): 00:32:42.917 | 1.00th=[20841], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:32:42.917 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[30802], 00:32:42.917 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31327], 95.00th=[31589], 00:32:42.917 | 99.00th=[32375], 99.50th=[42206], 99.90th=[47449], 99.95th=[49021], 00:32:42.917 | 99.99th=[49546] 00:32:42.917 bw ( KiB/s): min= 2043, max= 2176, per=4.12%, avg=2067.95, stdev=48.08, samples=19 00:32:42.917 iops : min= 510, max= 544, avg=516.95, stdev=12.04, samples=19 00:32:42.917 lat (msec) : 20=0.88%, 50=99.12% 00:32:42.917 cpu : usr=98.98%, sys=0.58%, ctx=46, majf=0, minf=25 00:32:42.917 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:32:42.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.917 filename1: (groupid=0, jobs=1): err= 0: pid=1282789: Tue Jun 11 08:25:13 2024 00:32:42.917 read: IOPS=520, BW=2081KiB/s (2131kB/s)(20.4MiB/10024msec) 00:32:42.917 slat (usec): min=5, max=108, avg=23.29, stdev=18.79 00:32:42.917 clat (usec): min=3468, max=32296, avg=30566.46, stdev=1681.30 00:32:42.917 lat (usec): min=3483, max=32304, avg=30589.75, stdev=1680.50 00:32:42.917 clat percentiles (usec): 00:32:42.917 | 1.00th=[28705], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.917 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.917 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.917 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32113], 99.95th=[32375], 00:32:42.917 | 99.99th=[32375] 00:32:42.917 bw ( KiB/s): min= 2048, max= 2176, per=4.14%, avg=2080.00, stdev=56.87, samples=20 00:32:42.917 iops : min= 512, max= 544, avg=520.00, stdev=14.22, samples=20 00:32:42.917 lat (msec) : 4=0.31%, 50=99.69% 00:32:42.917 cpu : usr=99.10%, sys=0.62%, ctx=17, majf=0, minf=28 00:32:42.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:42.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.917 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.917 filename1: (groupid=0, jobs=1): err= 0: pid=1282790: Tue Jun 11 08:25:13 2024 00:32:42.917 read: IOPS=518, BW=2073KiB/s (2123kB/s)(20.2MiB/10004msec) 00:32:42.917 slat (nsec): min=5529, max=64079, avg=18018.18, stdev=10963.32 00:32:42.917 clat (usec): min=10496, max=50183, avg=30721.08, stdev=1671.68 00:32:42.917 lat (usec): min=10505, max=50198, avg=30739.10, stdev=1671.37 00:32:42.917 clat percentiles (usec): 00:32:42.917 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30016], 20.00th=[30278], 00:32:42.917 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.917 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.917 | 99.00th=[32113], 99.50th=[32637], 99.90th=[50070], 99.95th=[50070], 00:32:42.918 | 99.99th=[50070] 00:32:42.918 bw ( KiB/s): min= 1923, max= 2176, per=4.12%, avg=2068.11, stdev=63.34, samples=19 00:32:42.918 iops : min= 480, max= 544, avg=516.95, stdev=15.87, samples=19 00:32:42.918 lat (msec) : 20=0.35%, 
50=99.34%, 100=0.31% 00:32:42.918 cpu : usr=99.42%, sys=0.31%, ctx=9, majf=0, minf=25 00:32:42.918 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:42.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.918 filename1: (groupid=0, jobs=1): err= 0: pid=1282791: Tue Jun 11 08:25:13 2024 00:32:42.918 read: IOPS=518, BW=2075KiB/s (2125kB/s)(20.3MiB/10023msec) 00:32:42.918 slat (usec): min=5, max=107, avg=22.57, stdev=16.70 00:32:42.918 clat (usec): min=19485, max=34136, avg=30664.40, stdev=766.47 00:32:42.918 lat (usec): min=19496, max=34156, avg=30686.97, stdev=765.32 00:32:42.918 clat percentiles (usec): 00:32:42.918 | 1.00th=[28705], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.918 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.918 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.918 | 99.00th=[32375], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:32:42.918 | 99.99th=[34341] 00:32:42.918 bw ( KiB/s): min= 2048, max= 2176, per=4.13%, avg=2073.80, stdev=52.44, samples=20 00:32:42.918 iops : min= 512, max= 544, avg=518.45, stdev=13.11, samples=20 00:32:42.918 lat (msec) : 20=0.04%, 50=99.96% 00:32:42.918 cpu : usr=99.09%, sys=0.61%, ctx=26, majf=0, minf=20 00:32:42.918 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:32:42.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.918 filename1: (groupid=0, jobs=1): err= 0: pid=1282792: Tue Jun 11 08:25:13 2024 00:32:42.918 read: IOPS=555, BW=2222KiB/s (2276kB/s)(21.7MiB/10004msec) 00:32:42.918 slat (nsec): min=5370, max=74347, avg=16379.19, stdev=12680.97 00:32:42.918 clat (usec): min=7173, max=74081, avg=28658.06, stdev=5140.36 00:32:42.918 lat (usec): min=7179, max=74099, avg=28674.44, stdev=5144.14 00:32:42.918 clat percentiles (usec): 00:32:42.918 | 1.00th=[12911], 5.00th=[16057], 10.00th=[19530], 20.00th=[27132], 00:32:42.918 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:32:42.918 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.918 | 99.00th=[36963], 99.50th=[46924], 99.90th=[54264], 99.95th=[54264], 00:32:42.918 | 99.99th=[73925] 00:32:42.918 bw ( KiB/s): min= 2048, max= 2944, per=4.43%, avg=2225.63, stdev=265.86, samples=19 00:32:42.918 iops : min= 512, max= 736, avg=556.37, stdev=66.47, samples=19 00:32:42.918 lat (msec) : 10=0.22%, 20=11.62%, 50=87.73%, 100=0.43% 00:32:42.918 cpu : usr=99.24%, sys=0.50%, ctx=20, majf=0, minf=25 00:32:42.918 IO depths : 1=3.4%, 2=7.0%, 4=15.7%, 8=63.5%, 16=10.5%, 32=0.0%, >=64=0.0% 00:32:42.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 complete : 0=0.0%, 4=91.9%, 8=3.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 issued rwts: total=5558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.918 filename2: (groupid=0, jobs=1): err= 0: pid=1282793: Tue Jun 11 08:25:13 2024 
00:32:42.918 read: IOPS=517, BW=2069KiB/s (2118kB/s)(20.2MiB/10004msec) 00:32:42.918 slat (usec): min=5, max=108, avg=20.76, stdev=18.03 00:32:42.918 clat (usec): min=7677, max=74056, avg=30850.73, stdev=2323.48 00:32:42.918 lat (usec): min=7683, max=74071, avg=30871.49, stdev=2323.43 00:32:42.918 clat percentiles (usec): 00:32:42.918 | 1.00th=[27395], 5.00th=[30016], 10.00th=[30278], 20.00th=[30278], 00:32:42.918 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30802], 60.00th=[31065], 00:32:42.918 | 70.00th=[31065], 80.00th=[31327], 90.00th=[31589], 95.00th=[31851], 00:32:42.918 | 99.00th=[35390], 99.50th=[44827], 99.90th=[55313], 99.95th=[55313], 00:32:42.918 | 99.99th=[73925] 00:32:42.918 bw ( KiB/s): min= 1836, max= 2152, per=4.10%, avg=2059.58, stdev=63.90, samples=19 00:32:42.918 iops : min= 459, max= 538, avg=514.89, stdev=15.98, samples=19 00:32:42.918 lat (msec) : 10=0.06%, 20=0.44%, 50=99.30%, 100=0.19% 00:32:42.918 cpu : usr=99.28%, sys=0.46%, ctx=10, majf=0, minf=27 00:32:42.918 IO depths : 1=0.1%, 2=0.1%, 4=1.6%, 8=79.9%, 16=18.3%, 32=0.0%, >=64=0.0% 00:32:42.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 complete : 0=0.0%, 4=89.8%, 8=9.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 issued rwts: total=5174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.918 filename2: (groupid=0, jobs=1): err= 0: pid=1282794: Tue Jun 11 08:25:13 2024 00:32:42.918 read: IOPS=519, BW=2080KiB/s (2130kB/s)(20.3MiB/10008msec) 00:32:42.918 slat (usec): min=5, max=105, avg=27.38, stdev=20.66 00:32:42.918 clat (usec): min=10745, max=52395, avg=30504.18, stdev=2879.45 00:32:42.918 lat (usec): min=10751, max=52410, avg=30531.56, stdev=2880.26 00:32:42.918 clat percentiles (usec): 00:32:42.918 | 1.00th=[17957], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:32:42.918 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30540], 60.00th=[30802], 00:32:42.918 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31851], 00:32:42.918 | 99.00th=[41157], 99.50th=[46400], 99.90th=[52167], 99.95th=[52167], 00:32:42.918 | 99.99th=[52167] 00:32:42.918 bw ( KiB/s): min= 1920, max= 2240, per=4.12%, avg=2070.11, stdev=81.22, samples=19 00:32:42.918 iops : min= 480, max= 560, avg=517.53, stdev=20.30, samples=19 00:32:42.918 lat (msec) : 20=1.46%, 50=98.04%, 100=0.50% 00:32:42.918 cpu : usr=98.14%, sys=1.09%, ctx=697, majf=0, minf=26 00:32:42.918 IO depths : 1=5.5%, 2=11.0%, 4=22.6%, 8=53.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:32:42.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.918 issued rwts: total=5204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.918 filename2: (groupid=0, jobs=1): err= 0: pid=1282795: Tue Jun 11 08:25:13 2024 00:32:42.918 read: IOPS=517, BW=2070KiB/s (2119kB/s)(20.2MiB/10019msec) 00:32:42.918 slat (usec): min=5, max=130, avg=32.85, stdev=20.52 00:32:42.918 clat (usec): min=28295, max=47193, avg=30637.70, stdev=1064.97 00:32:42.918 lat (usec): min=28302, max=47214, avg=30670.56, stdev=1062.65 00:32:42.918 clat percentiles (usec): 00:32:42.918 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.918 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.918 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31589], 00:32:42.918 | 
99.00th=[32113], 99.50th=[32113], 99.90th=[46924], 99.95th=[46924], 00:32:42.919 | 99.99th=[47449] 00:32:42.919 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2066.95, stdev=62.73, samples=20 00:32:42.919 iops : min= 480, max= 544, avg=516.70, stdev=15.70, samples=20 00:32:42.919 lat (msec) : 50=100.00% 00:32:42.919 cpu : usr=98.71%, sys=0.79%, ctx=169, majf=0, minf=25 00:32:42.919 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.919 filename2: (groupid=0, jobs=1): err= 0: pid=1282796: Tue Jun 11 08:25:13 2024 00:32:42.919 read: IOPS=528, BW=2114KiB/s (2164kB/s)(20.7MiB/10026msec) 00:32:42.919 slat (usec): min=5, max=115, avg=23.36, stdev=20.57 00:32:42.919 clat (usec): min=10792, max=56158, avg=30091.28, stdev=5328.70 00:32:42.919 lat (usec): min=10800, max=56170, avg=30114.64, stdev=5331.09 00:32:42.919 clat percentiles (usec): 00:32:42.919 | 1.00th=[15664], 5.00th=[21365], 10.00th=[23725], 20.00th=[28443], 00:32:42.919 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30540], 60.00th=[30802], 00:32:42.919 | 70.00th=[31065], 80.00th=[31327], 90.00th=[33424], 95.00th=[39584], 00:32:42.919 | 99.00th=[50594], 99.50th=[52167], 99.90th=[55837], 99.95th=[56361], 00:32:42.919 | 99.99th=[56361] 00:32:42.919 bw ( KiB/s): min= 2016, max= 2320, per=4.21%, avg=2112.80, stdev=74.95, samples=20 00:32:42.919 iops : min= 504, max= 580, avg=528.20, stdev=18.74, samples=20 00:32:42.919 lat (msec) : 20=3.81%, 50=94.98%, 100=1.21% 00:32:42.919 cpu : usr=99.32%, sys=0.41%, ctx=11, majf=0, minf=29 00:32:42.919 IO depths : 1=2.9%, 2=5.9%, 4=14.5%, 8=66.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:32:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 complete : 0=0.0%, 4=91.4%, 8=3.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 issued rwts: total=5298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.919 filename2: (groupid=0, jobs=1): err= 0: pid=1282797: Tue Jun 11 08:25:13 2024 00:32:42.919 read: IOPS=517, BW=2070KiB/s (2119kB/s)(20.2MiB/10011msec) 00:32:42.919 slat (usec): min=5, max=133, avg=21.16, stdev=20.57 00:32:42.919 clat (usec): min=9463, max=50666, avg=30762.30, stdev=1856.67 00:32:42.919 lat (usec): min=9471, max=50678, avg=30783.46, stdev=1856.40 00:32:42.919 clat percentiles (usec): 00:32:42.919 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.919 | 30.00th=[30540], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.919 | 70.00th=[31065], 80.00th=[31065], 90.00th=[31589], 95.00th=[31851], 00:32:42.919 | 99.00th=[32637], 99.50th=[35914], 99.90th=[50594], 99.95th=[50594], 00:32:42.919 | 99.99th=[50594] 00:32:42.919 bw ( KiB/s): min= 2048, max= 2096, per=4.12%, avg=2066.53, stdev=22.79, samples=19 00:32:42.919 iops : min= 512, max= 524, avg=516.63, stdev= 5.70, samples=19 00:32:42.919 lat (msec) : 10=0.08%, 20=0.31%, 50=99.34%, 100=0.27% 00:32:42.919 cpu : usr=99.19%, sys=0.56%, ctx=9, majf=0, minf=28 00:32:42.919 IO depths : 1=1.6%, 2=3.3%, 4=7.1%, 8=72.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:32:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 complete : 0=0.0%, 4=90.7%, 
8=7.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 issued rwts: total=5180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.919 filename2: (groupid=0, jobs=1): err= 0: pid=1282798: Tue Jun 11 08:25:13 2024 00:32:42.919 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.2MiB/10018msec) 00:32:42.919 slat (usec): min=5, max=114, avg=31.86, stdev=19.51 00:32:42.919 clat (usec): min=27692, max=45850, avg=30631.08, stdev=1002.16 00:32:42.919 lat (usec): min=27698, max=45870, avg=30662.94, stdev=1000.38 00:32:42.919 clat percentiles (usec): 00:32:42.919 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:32:42.919 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.919 | 70.00th=[30802], 80.00th=[31065], 90.00th=[31327], 95.00th=[31327], 00:32:42.919 | 99.00th=[32113], 99.50th=[32375], 99.90th=[45876], 99.95th=[45876], 00:32:42.919 | 99.99th=[45876] 00:32:42.919 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2066.70, stdev=62.28, samples=20 00:32:42.919 iops : min= 480, max= 544, avg=516.60, stdev=15.52, samples=20 00:32:42.919 lat (msec) : 50=100.00% 00:32:42.919 cpu : usr=98.52%, sys=0.80%, ctx=188, majf=0, minf=29 00:32:42.919 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.919 filename2: (groupid=0, jobs=1): err= 0: pid=1282799: Tue Jun 11 08:25:13 2024 00:32:42.919 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.3MiB/10025msec) 00:32:42.919 slat (usec): min=5, max=119, avg=18.03, stdev=18.39 00:32:42.919 clat (usec): min=8884, max=49616, avg=27948.53, stdev=4926.37 00:32:42.919 lat (usec): min=8890, max=49622, avg=27966.56, stdev=4932.08 00:32:42.919 clat percentiles (usec): 00:32:42.919 | 1.00th=[15795], 5.00th=[16581], 10.00th=[20055], 20.00th=[23725], 00:32:42.919 | 30.00th=[29754], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:32:42.919 | 70.00th=[30540], 80.00th=[31065], 90.00th=[31065], 95.00th=[31327], 00:32:42.919 | 99.00th=[32375], 99.50th=[33817], 99.90th=[49546], 99.95th=[49546], 00:32:42.919 | 99.99th=[49546] 00:32:42.919 bw ( KiB/s): min= 2048, max= 3072, per=4.54%, avg=2281.60, stdev=367.34, samples=20 00:32:42.919 iops : min= 512, max= 768, avg=570.40, stdev=91.83, samples=20 00:32:42.919 lat (msec) : 10=0.21%, 20=10.02%, 50=89.77% 00:32:42.919 cpu : usr=98.40%, sys=0.93%, ctx=142, majf=0, minf=16 00:32:42.919 IO depths : 1=4.1%, 2=8.6%, 4=19.5%, 8=59.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:32:42.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 complete : 0=0.0%, 4=92.5%, 8=1.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.919 issued rwts: total=5706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.920 filename2: (groupid=0, jobs=1): err= 0: pid=1282800: Tue Jun 11 08:25:13 2024 00:32:42.920 read: IOPS=516, BW=2067KiB/s (2117kB/s)(20.2MiB/10003msec) 00:32:42.920 slat (usec): min=5, max=118, avg=21.83, stdev=18.73 00:32:42.920 clat (usec): min=4114, max=55992, avg=30773.63, stdev=5123.51 00:32:42.920 lat (usec): min=4120, max=56032, avg=30795.46, stdev=5124.09 00:32:42.920 clat percentiles (usec): 00:32:42.920 | 
1.00th=[15795], 5.00th=[22414], 10.00th=[27919], 20.00th=[30016], 00:32:42.920 | 30.00th=[30278], 40.00th=[30540], 50.00th=[30540], 60.00th=[30802], 00:32:42.920 | 70.00th=[31065], 80.00th=[31327], 90.00th=[32900], 95.00th=[39584], 00:32:42.920 | 99.00th=[50070], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:32:42.920 | 99.99th=[55837] 00:32:42.920 bw ( KiB/s): min= 1836, max= 2240, per=4.10%, avg=2060.42, stdev=106.94, samples=19 00:32:42.920 iops : min= 459, max= 560, avg=515.11, stdev=26.74, samples=19 00:32:42.920 lat (msec) : 10=0.15%, 20=2.69%, 50=96.03%, 100=1.12% 00:32:42.920 cpu : usr=98.94%, sys=0.63%, ctx=101, majf=0, minf=23 00:32:42.920 IO depths : 1=3.0%, 2=7.2%, 4=18.0%, 8=61.4%, 16=10.3%, 32=0.0%, >=64=0.0% 00:32:42.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.920 complete : 0=0.0%, 4=92.4%, 8=2.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.920 issued rwts: total=5170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:42.920 00:32:42.920 Run status group 0 (all jobs): 00:32:42.920 READ: bw=49.0MiB/s (51.4MB/s), 2067KiB/s-2277KiB/s (2117kB/s-2331kB/s), io=492MiB (515MB), run=10002-10026msec 00:32:42.920 08:25:13 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:42.920 08:25:13 -- target/dif.sh@43 -- # local sub 00:32:42.920 08:25:13 -- target/dif.sh@45 -- # for sub in "$@" 00:32:42.920 08:25:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:42.920 08:25:13 -- target/dif.sh@36 -- # local sub_id=0 00:32:42.920 08:25:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:42.920 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:42.920 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:42.920 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:42.920 08:25:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:42.920 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:42.920 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:42.920 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:42.920 08:25:13 -- target/dif.sh@45 -- # for sub in "$@" 00:32:42.920 08:25:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:42.920 08:25:13 -- target/dif.sh@36 -- # local sub_id=1 00:32:42.920 08:25:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.920 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:42.920 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.182 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@45 -- # for sub in "$@" 00:32:43.183 08:25:13 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:43.183 08:25:13 -- target/dif.sh@36 -- # local sub_id=2 00:32:43.183 08:25:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null2 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:43.183 08:25:13 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:43.183 08:25:13 -- target/dif.sh@115 -- # numjobs=2 00:32:43.183 08:25:13 -- target/dif.sh@115 -- # iodepth=8 00:32:43.183 08:25:13 -- target/dif.sh@115 -- # runtime=5 00:32:43.183 08:25:13 -- target/dif.sh@115 -- # files=1 00:32:43.183 08:25:13 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:43.183 08:25:13 -- target/dif.sh@28 -- # local sub 00:32:43.183 08:25:13 -- target/dif.sh@30 -- # for sub in "$@" 00:32:43.183 08:25:13 -- target/dif.sh@31 -- # create_subsystem 0 00:32:43.183 08:25:13 -- target/dif.sh@18 -- # local sub_id=0 00:32:43.183 08:25:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 bdev_null0 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 [2024-06-11 08:25:13.637861] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@30 -- # for sub in "$@" 00:32:43.183 08:25:13 -- target/dif.sh@31 -- # create_subsystem 1 00:32:43.183 08:25:13 -- target/dif.sh@18 -- # local sub_id=1 00:32:43.183 08:25:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 bdev_null1 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 
08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.183 08:25:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:43.183 08:25:13 -- common/autotest_common.sh@10 -- # set +x 00:32:43.183 08:25:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:43.183 08:25:13 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:43.183 08:25:13 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:43.183 08:25:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:43.183 08:25:13 -- nvmf/common.sh@520 -- # config=() 00:32:43.183 08:25:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.183 08:25:13 -- nvmf/common.sh@520 -- # local subsystem config 00:32:43.183 08:25:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:43.183 08:25:13 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.183 08:25:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:43.183 { 00:32:43.183 "params": { 00:32:43.183 "name": "Nvme$subsystem", 00:32:43.183 "trtype": "$TEST_TRANSPORT", 00:32:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.183 "adrfam": "ipv4", 00:32:43.183 "trsvcid": "$NVMF_PORT", 00:32:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.183 "hdgst": ${hdgst:-false}, 00:32:43.183 "ddgst": ${ddgst:-false} 00:32:43.183 }, 00:32:43.183 "method": "bdev_nvme_attach_controller" 00:32:43.183 } 00:32:43.183 EOF 00:32:43.183 )") 00:32:43.183 08:25:13 -- target/dif.sh@82 -- # gen_fio_conf 00:32:43.183 08:25:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:43.183 08:25:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:43.183 08:25:13 -- target/dif.sh@54 -- # local file 00:32:43.183 08:25:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:43.183 08:25:13 -- target/dif.sh@56 -- # cat 00:32:43.183 08:25:13 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.183 08:25:13 -- common/autotest_common.sh@1320 -- # shift 00:32:43.183 08:25:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:43.183 08:25:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.183 08:25:13 -- nvmf/common.sh@542 -- # cat 00:32:43.183 08:25:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.184 08:25:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:43.184 08:25:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:43.184 08:25:13 -- target/dif.sh@72 -- # (( file <= files )) 00:32:43.184 08:25:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:43.184 08:25:13 -- target/dif.sh@73 -- # cat 00:32:43.184 08:25:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:43.184 08:25:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:43.184 { 00:32:43.184 "params": { 00:32:43.184 "name": "Nvme$subsystem", 00:32:43.184 "trtype": "$TEST_TRANSPORT", 00:32:43.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.184 "adrfam": "ipv4", 00:32:43.184 "trsvcid": 
"$NVMF_PORT", 00:32:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.184 "hdgst": ${hdgst:-false}, 00:32:43.184 "ddgst": ${ddgst:-false} 00:32:43.184 }, 00:32:43.184 "method": "bdev_nvme_attach_controller" 00:32:43.184 } 00:32:43.184 EOF 00:32:43.184 )") 00:32:43.184 08:25:13 -- target/dif.sh@72 -- # (( file++ )) 00:32:43.184 08:25:13 -- target/dif.sh@72 -- # (( file <= files )) 00:32:43.184 08:25:13 -- nvmf/common.sh@542 -- # cat 00:32:43.184 08:25:13 -- nvmf/common.sh@544 -- # jq . 00:32:43.184 08:25:13 -- nvmf/common.sh@545 -- # IFS=, 00:32:43.184 08:25:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:43.184 "params": { 00:32:43.184 "name": "Nvme0", 00:32:43.184 "trtype": "tcp", 00:32:43.184 "traddr": "10.0.0.2", 00:32:43.184 "adrfam": "ipv4", 00:32:43.184 "trsvcid": "4420", 00:32:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:43.184 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:43.184 "hdgst": false, 00:32:43.184 "ddgst": false 00:32:43.184 }, 00:32:43.184 "method": "bdev_nvme_attach_controller" 00:32:43.184 },{ 00:32:43.184 "params": { 00:32:43.184 "name": "Nvme1", 00:32:43.184 "trtype": "tcp", 00:32:43.184 "traddr": "10.0.0.2", 00:32:43.184 "adrfam": "ipv4", 00:32:43.184 "trsvcid": "4420", 00:32:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.184 "hdgst": false, 00:32:43.184 "ddgst": false 00:32:43.184 }, 00:32:43.184 "method": "bdev_nvme_attach_controller" 00:32:43.184 }' 00:32:43.184 08:25:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:43.184 08:25:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:43.184 08:25:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:43.184 08:25:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:43.184 08:25:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:43.184 08:25:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:43.184 08:25:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:43.184 08:25:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:43.184 08:25:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:43.184 08:25:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:43.754 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:43.754 ... 00:32:43.754 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:43.754 ... 00:32:43.754 fio-3.35 00:32:43.754 Starting 4 threads 00:32:43.754 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.328 [2024-06-11 08:25:14.816987] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:44.328 [2024-06-11 08:25:14.817033] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:49.619 00:32:49.619 filename0: (groupid=0, jobs=1): err= 0: pid=1285332: Tue Jun 11 08:25:19 2024 00:32:49.619 read: IOPS=2208, BW=17.3MiB/s (18.1MB/s)(86.3MiB/5003msec) 00:32:49.619 slat (nsec): min=5334, max=67775, avg=7725.74, stdev=3530.18 00:32:49.619 clat (usec): min=1770, max=6160, avg=3604.24, stdev=513.55 00:32:49.619 lat (usec): min=1793, max=6165, avg=3611.97, stdev=513.33 00:32:49.619 clat percentiles (usec): 00:32:49.619 | 1.00th=[ 2638], 5.00th=[ 3032], 10.00th=[ 3163], 20.00th=[ 3326], 00:32:49.619 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3621], 00:32:49.619 | 70.00th=[ 3654], 80.00th=[ 3687], 90.00th=[ 4015], 95.00th=[ 5014], 00:32:49.619 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5800], 99.95th=[ 6128], 00:32:49.619 | 99.99th=[ 6128] 00:32:49.619 bw ( KiB/s): min=17328, max=18000, per=25.33%, avg=17685.33, stdev=256.37, samples=9 00:32:49.619 iops : min= 2166, max= 2250, avg=2210.67, stdev=32.05, samples=9 00:32:49.619 lat (msec) : 2=0.06%, 4=89.65%, 10=10.28% 00:32:49.619 cpu : usr=96.92%, sys=2.82%, ctx=7, majf=0, minf=9 00:32:49.619 IO depths : 1=0.1%, 2=0.1%, 4=67.9%, 8=32.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 issued rwts: total=11048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:49.619 filename0: (groupid=0, jobs=1): err= 0: pid=1285333: Tue Jun 11 08:25:19 2024 00:32:49.619 read: IOPS=2211, BW=17.3MiB/s (18.1MB/s)(86.4MiB/5001msec) 00:32:49.619 slat (nsec): min=5335, max=55609, avg=7662.70, stdev=3491.25 00:32:49.619 clat (usec): min=1595, max=6204, avg=3597.23, stdev=599.81 00:32:49.619 lat (usec): min=1601, max=6209, avg=3604.89, stdev=599.71 00:32:49.619 clat percentiles (usec): 00:32:49.619 | 1.00th=[ 2540], 5.00th=[ 2868], 10.00th=[ 3032], 20.00th=[ 3228], 00:32:49.619 | 30.00th=[ 3326], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 3556], 00:32:49.619 | 70.00th=[ 3654], 80.00th=[ 3687], 90.00th=[ 4621], 95.00th=[ 5014], 00:32:49.619 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5800], 99.95th=[ 6063], 00:32:49.619 | 99.99th=[ 6194] 00:32:49.619 bw ( KiB/s): min=17232, max=18016, per=25.31%, avg=17669.33, stdev=297.08, samples=9 00:32:49.619 iops : min= 2154, max= 2252, avg=2208.67, stdev=37.13, samples=9 00:32:49.619 lat (msec) : 2=0.05%, 4=86.41%, 10=13.54% 00:32:49.619 cpu : usr=97.18%, sys=2.56%, ctx=9, majf=0, minf=9 00:32:49.619 IO depths : 1=0.1%, 2=0.4%, 4=70.2%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 issued rwts: total=11062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:49.619 filename1: (groupid=0, jobs=1): err= 0: pid=1285334: Tue Jun 11 08:25:19 2024 00:32:49.619 read: IOPS=2161, BW=16.9MiB/s (17.7MB/s)(84.5MiB/5002msec) 00:32:49.619 slat (nsec): min=5335, max=84136, avg=7702.95, stdev=3734.10 00:32:49.619 clat (usec): min=1842, max=6284, avg=3680.42, stdev=598.03 00:32:49.619 lat (usec): min=1859, max=6290, avg=3688.12, stdev=597.86 00:32:49.619 clat percentiles (usec): 00:32:49.619 | 1.00th=[ 2769], 
5.00th=[ 3097], 10.00th=[ 3228], 20.00th=[ 3326], 00:32:49.619 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3556], 60.00th=[ 3621], 00:32:49.619 | 70.00th=[ 3654], 80.00th=[ 3785], 90.00th=[ 4948], 95.00th=[ 5211], 00:32:49.619 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6128], 00:32:49.619 | 99.99th=[ 6259] 00:32:49.619 bw ( KiB/s): min=16800, max=17744, per=24.77%, avg=17292.44, stdev=318.27, samples=9 00:32:49.619 iops : min= 2100, max= 2218, avg=2161.56, stdev=39.78, samples=9 00:32:49.619 lat (msec) : 2=0.04%, 4=86.11%, 10=13.85% 00:32:49.619 cpu : usr=97.56%, sys=2.18%, ctx=6, majf=0, minf=9 00:32:49.619 IO depths : 1=0.1%, 2=0.2%, 4=71.3%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 issued rwts: total=10813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:49.619 filename1: (groupid=0, jobs=1): err= 0: pid=1285335: Tue Jun 11 08:25:19 2024 00:32:49.619 read: IOPS=2147, BW=16.8MiB/s (17.6MB/s)(83.9MiB/5001msec) 00:32:49.619 slat (nsec): min=5332, max=42853, avg=7343.17, stdev=2743.97 00:32:49.619 clat (usec): min=1398, max=7056, avg=3705.69, stdev=629.73 00:32:49.619 lat (usec): min=1419, max=7089, avg=3713.04, stdev=629.73 00:32:49.619 clat percentiles (usec): 00:32:49.619 | 1.00th=[ 2704], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3326], 00:32:49.619 | 30.00th=[ 3392], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3621], 00:32:49.619 | 70.00th=[ 3654], 80.00th=[ 3884], 90.00th=[ 5014], 95.00th=[ 5211], 00:32:49.619 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6128], 99.95th=[ 6325], 00:32:49.619 | 99.99th=[ 6783] 00:32:49.619 bw ( KiB/s): min=16912, max=17536, per=24.59%, avg=17171.56, stdev=227.37, samples=9 00:32:49.619 iops : min= 2114, max= 2192, avg=2146.44, stdev=28.42, samples=9 00:32:49.619 lat (msec) : 2=0.08%, 4=85.02%, 10=14.90% 00:32:49.619 cpu : usr=97.54%, sys=2.22%, ctx=7, majf=0, minf=9 00:32:49.619 IO depths : 1=0.1%, 2=0.1%, 4=72.9%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:49.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:49.619 issued rwts: total=10738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:49.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:49.619 00:32:49.619 Run status group 0 (all jobs): 00:32:49.619 READ: bw=68.2MiB/s (71.5MB/s), 16.8MiB/s-17.3MiB/s (17.6MB/s-18.1MB/s), io=341MiB (358MB), run=5001-5003msec 00:32:49.619 08:25:20 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:49.619 08:25:20 -- target/dif.sh@43 -- # local sub 00:32:49.619 08:25:20 -- target/dif.sh@45 -- # for sub in "$@" 00:32:49.619 08:25:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:49.619 08:25:20 -- target/dif.sh@36 -- # local sub_id=0 00:32:49.619 08:25:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:49.619 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.619 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.619 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.619 08:25:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:49.619 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.619 08:25:20 -- common/autotest_common.sh@10 -- # set +x 
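The randread pass summarized above is driven through SPDK's fio bdev plugin rather than a kernel block device. Condensed from the fio_bdev/fio_plugin trace earlier in this run, the invocation amounts to the sketch below (plugin path and fd numbers as logged; fd 62 carries the generated bdev_nvme_attach_controller JSON, fd 61 the fio job file):

    # illustrative sketch of the traced fio_bdev()/fio_plugin() call
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61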
00:32:49.620 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.620 08:25:20 -- target/dif.sh@45 -- # for sub in "$@" 00:32:49.620 08:25:20 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:49.620 08:25:20 -- target/dif.sh@36 -- # local sub_id=1 00:32:49.620 08:25:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:49.620 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.620 08:25:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:49.620 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.620 00:32:49.620 real 0m24.485s 00:32:49.620 user 5m14.847s 00:32:49.620 sys 0m3.445s 00:32:49.620 08:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 ************************************ 00:32:49.620 END TEST fio_dif_rand_params 00:32:49.620 ************************************ 00:32:49.620 08:25:20 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:49.620 08:25:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:49.620 08:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 ************************************ 00:32:49.620 START TEST fio_dif_digest 00:32:49.620 ************************************ 00:32:49.620 08:25:20 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:32:49.620 08:25:20 -- target/dif.sh@123 -- # local NULL_DIF 00:32:49.620 08:25:20 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:49.620 08:25:20 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:49.620 08:25:20 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:49.620 08:25:20 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:49.620 08:25:20 -- target/dif.sh@127 -- # numjobs=3 00:32:49.620 08:25:20 -- target/dif.sh@127 -- # iodepth=3 00:32:49.620 08:25:20 -- target/dif.sh@127 -- # runtime=10 00:32:49.620 08:25:20 -- target/dif.sh@128 -- # hdgst=true 00:32:49.620 08:25:20 -- target/dif.sh@128 -- # ddgst=true 00:32:49.620 08:25:20 -- target/dif.sh@130 -- # create_subsystems 0 00:32:49.620 08:25:20 -- target/dif.sh@28 -- # local sub 00:32:49.620 08:25:20 -- target/dif.sh@30 -- # for sub in "$@" 00:32:49.620 08:25:20 -- target/dif.sh@31 -- # create_subsystem 0 00:32:49.620 08:25:20 -- target/dif.sh@18 -- # local sub_id=0 00:32:49.620 08:25:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:49.620 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 bdev_null0 00:32:49.620 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.620 08:25:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:49.620 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.620 08:25:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:32:49.620 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.620 08:25:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:49.620 08:25:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:49.620 08:25:20 -- common/autotest_common.sh@10 -- # set +x 00:32:49.620 [2024-06-11 08:25:20.226955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.620 08:25:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:49.620 08:25:20 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:49.620 08:25:20 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:49.620 08:25:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:49.620 08:25:20 -- nvmf/common.sh@520 -- # config=() 00:32:49.620 08:25:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.620 08:25:20 -- nvmf/common.sh@520 -- # local subsystem config 00:32:49.620 08:25:20 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:49.620 08:25:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:49.620 08:25:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:49.620 { 00:32:49.620 "params": { 00:32:49.620 "name": "Nvme$subsystem", 00:32:49.620 "trtype": "$TEST_TRANSPORT", 00:32:49.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.620 "adrfam": "ipv4", 00:32:49.620 "trsvcid": "$NVMF_PORT", 00:32:49.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.620 "hdgst": ${hdgst:-false}, 00:32:49.620 "ddgst": ${ddgst:-false} 00:32:49.620 }, 00:32:49.620 "method": "bdev_nvme_attach_controller" 00:32:49.620 } 00:32:49.620 EOF 00:32:49.620 )") 00:32:49.620 08:25:20 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:49.620 08:25:20 -- target/dif.sh@82 -- # gen_fio_conf 00:32:49.620 08:25:20 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:49.620 08:25:20 -- target/dif.sh@54 -- # local file 00:32:49.620 08:25:20 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:49.620 08:25:20 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.620 08:25:20 -- target/dif.sh@56 -- # cat 00:32:49.620 08:25:20 -- common/autotest_common.sh@1320 -- # shift 00:32:49.620 08:25:20 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:49.620 08:25:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:49.620 08:25:20 -- nvmf/common.sh@542 -- # cat 00:32:49.620 08:25:20 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.620 08:25:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:49.620 08:25:20 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:49.620 08:25:20 -- target/dif.sh@72 -- # (( file <= files )) 00:32:49.620 08:25:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:49.620 08:25:20 -- nvmf/common.sh@544 -- # jq . 
00:32:49.620 08:25:20 -- nvmf/common.sh@545 -- # IFS=, 00:32:49.620 08:25:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:49.620 "params": { 00:32:49.620 "name": "Nvme0", 00:32:49.620 "trtype": "tcp", 00:32:49.620 "traddr": "10.0.0.2", 00:32:49.620 "adrfam": "ipv4", 00:32:49.620 "trsvcid": "4420", 00:32:49.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:49.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:49.620 "hdgst": true, 00:32:49.620 "ddgst": true 00:32:49.620 }, 00:32:49.620 "method": "bdev_nvme_attach_controller" 00:32:49.620 }' 00:32:49.901 08:25:20 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:49.901 08:25:20 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:49.901 08:25:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:49.901 08:25:20 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:49.901 08:25:20 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:49.901 08:25:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:49.901 08:25:20 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:49.901 08:25:20 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:49.901 08:25:20 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:49.901 08:25:20 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:50.164 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:50.164 ... 00:32:50.164 fio-3.35 00:32:50.164 Starting 3 threads 00:32:50.164 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.425 [2024-06-11 08:25:20.972458] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:50.425 [2024-06-11 08:25:20.972507] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:02.650 00:33:02.650 filename0: (groupid=0, jobs=1): err= 0: pid=1286544: Tue Jun 11 08:25:31 2024 00:33:02.650 read: IOPS=218, BW=27.4MiB/s (28.7MB/s)(275MiB/10048msec) 00:33:02.650 slat (nsec): min=5716, max=33869, avg=7221.48, stdev=1432.04 00:33:02.650 clat (usec): min=7962, max=56885, avg=13679.79, stdev=3544.79 00:33:02.650 lat (usec): min=7968, max=56891, avg=13687.01, stdev=3544.79 00:33:02.650 clat percentiles (usec): 00:33:02.650 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[11731], 20.00th=[12387], 00:33:02.650 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:33:02.650 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15139], 95.00th=[15664], 00:33:02.650 | 99.00th=[16909], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:33:02.650 | 99.99th=[56886] 00:33:02.650 bw ( KiB/s): min=25344, max=29696, per=32.81%, avg=28108.80, stdev=1236.41, samples=20 00:33:02.650 iops : min= 198, max= 232, avg=219.60, stdev= 9.66, samples=20 00:33:02.650 lat (msec) : 10=2.73%, 20=96.63%, 100=0.64% 00:33:02.650 cpu : usr=95.64%, sys=4.12%, ctx=28, majf=0, minf=149 00:33:02.650 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.650 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:02.650 filename0: (groupid=0, jobs=1): err= 0: pid=1286545: Tue Jun 11 08:25:31 2024 00:33:02.650 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(276MiB/10045msec) 00:33:02.650 slat (nsec): min=5699, max=88690, avg=8534.05, stdev=2223.09 00:33:02.650 clat (usec): min=7880, max=56057, avg=13612.42, stdev=4713.23 00:33:02.650 lat (usec): min=7888, max=56064, avg=13620.95, stdev=4713.19 00:33:02.650 clat percentiles (usec): 00:33:02.650 | 1.00th=[ 9241], 5.00th=[11207], 10.00th=[11600], 20.00th=[12125], 00:33:02.650 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:33:02.650 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14746], 95.00th=[15270], 00:33:02.650 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54264], 99.95th=[55313], 00:33:02.650 | 99.99th=[55837] 00:33:02.650 bw ( KiB/s): min=24576, max=30464, per=32.98%, avg=28252.45, stdev=1606.01, samples=20 00:33:02.650 iops : min= 192, max= 238, avg=220.70, stdev=12.54, samples=20 00:33:02.650 lat (msec) : 10=1.86%, 20=96.70%, 50=0.23%, 100=1.22% 00:33:02.650 cpu : usr=95.61%, sys=4.14%, ctx=31, majf=0, minf=154 00:33:02.650 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.650 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:02.650 filename0: (groupid=0, jobs=1): err= 0: pid=1286546: Tue Jun 11 08:25:31 2024 00:33:02.650 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(290MiB/10044msec) 00:33:02.650 slat (nsec): min=5598, max=49955, avg=7538.19, stdev=1798.37 00:33:02.650 clat (usec): min=7304, max=53759, avg=12983.68, stdev=2327.77 00:33:02.650 lat (usec): min=7312, max=53766, avg=12991.22, stdev=2327.81 00:33:02.650 clat percentiles (usec): 
00:33:02.650 | 1.00th=[ 8717], 5.00th=[10159], 10.00th=[11207], 20.00th=[11994], 00:33:02.650 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:33:02.650 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[15008], 00:33:02.650 | 99.00th=[16057], 99.50th=[16450], 99.90th=[52691], 99.95th=[53216], 00:33:02.650 | 99.99th=[53740] 00:33:02.650 bw ( KiB/s): min=28416, max=30720, per=34.58%, avg=29619.20, stdev=743.35, samples=20 00:33:02.650 iops : min= 222, max= 240, avg=231.40, stdev= 5.81, samples=20 00:33:02.650 lat (msec) : 10=4.58%, 20=95.21%, 100=0.22% 00:33:02.650 cpu : usr=95.62%, sys=4.15%, ctx=28, majf=0, minf=202 00:33:02.650 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.650 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:02.650 00:33:02.650 Run status group 0 (all jobs): 00:33:02.650 READ: bw=83.6MiB/s (87.7MB/s), 27.4MiB/s-28.8MiB/s (28.7MB/s-30.2MB/s), io=841MiB (881MB), run=10044-10048msec 00:33:02.650 08:25:31 -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:02.650 08:25:31 -- target/dif.sh@43 -- # local sub 00:33:02.650 08:25:31 -- target/dif.sh@45 -- # for sub in "$@" 00:33:02.650 08:25:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:02.650 08:25:31 -- target/dif.sh@36 -- # local sub_id=0 00:33:02.650 08:25:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:02.650 08:25:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.650 08:25:31 -- common/autotest_common.sh@10 -- # set +x 00:33:02.650 08:25:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.650 08:25:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:02.650 08:25:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.650 08:25:31 -- common/autotest_common.sh@10 -- # set +x 00:33:02.650 08:25:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.650 00:33:02.650 real 0m11.111s 00:33:02.650 user 0m40.887s 00:33:02.650 sys 0m1.547s 00:33:02.650 08:25:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:02.650 08:25:31 -- common/autotest_common.sh@10 -- # set +x 00:33:02.650 ************************************ 00:33:02.650 END TEST fio_dif_digest 00:33:02.650 ************************************ 00:33:02.650 08:25:31 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:02.650 08:25:31 -- target/dif.sh@147 -- # nvmftestfini 00:33:02.651 08:25:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:02.651 08:25:31 -- nvmf/common.sh@116 -- # sync 00:33:02.651 08:25:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:02.651 08:25:31 -- nvmf/common.sh@119 -- # set +e 00:33:02.651 08:25:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:02.651 08:25:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:02.651 rmmod nvme_tcp 00:33:02.651 rmmod nvme_fabrics 00:33:02.651 rmmod nvme_keyring 00:33:02.651 08:25:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:02.651 08:25:31 -- nvmf/common.sh@123 -- # set -e 00:33:02.651 08:25:31 -- nvmf/common.sh@124 -- # return 0 00:33:02.651 08:25:31 -- nvmf/common.sh@477 -- # '[' -n 1275986 ']' 00:33:02.651 08:25:31 -- nvmf/common.sh@478 -- # killprocess 1275986 00:33:02.651 08:25:31 -- common/autotest_common.sh@926 -- # '[' -z 
1275986 ']' 00:33:02.651 08:25:31 -- common/autotest_common.sh@930 -- # kill -0 1275986 00:33:02.651 08:25:31 -- common/autotest_common.sh@931 -- # uname 00:33:02.651 08:25:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:02.651 08:25:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1275986 00:33:02.651 08:25:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:02.651 08:25:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:02.651 08:25:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1275986' 00:33:02.651 killing process with pid 1275986 00:33:02.651 08:25:31 -- common/autotest_common.sh@945 -- # kill 1275986 00:33:02.651 08:25:31 -- common/autotest_common.sh@950 -- # wait 1275986 00:33:02.651 08:25:31 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:02.651 08:25:31 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:04.562 Waiting for block devices as requested 00:33:04.562 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:04.562 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:04.562 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:04.823 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:04.823 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:04.823 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:04.823 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:05.084 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:05.084 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:05.344 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:05.344 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:05.344 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:05.344 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:05.605 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:05.605 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:05.605 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:05.605 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:05.865 08:25:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:05.865 08:25:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:05.865 08:25:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:05.865 08:25:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:05.865 08:25:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.865 08:25:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:05.865 08:25:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.775 08:25:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:07.775 00:33:07.775 real 1m17.087s 00:33:07.775 user 7m50.899s 00:33:07.775 sys 0m18.916s 00:33:07.775 08:25:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.775 08:25:38 -- common/autotest_common.sh@10 -- # set +x 00:33:07.775 ************************************ 00:33:07.775 END TEST nvmf_dif 00:33:07.775 ************************************ 00:33:07.775 08:25:38 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:07.775 08:25:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:07.775 08:25:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:07.775 08:25:38 -- common/autotest_common.sh@10 -- # set +x 00:33:07.775 ************************************ 00:33:07.775 START TEST nvmf_abort_qd_sizes 00:33:07.775 ************************************ 
00:33:07.775 08:25:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:08.036 * Looking for test storage... 00:33:08.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.036 08:25:38 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.036 08:25:38 -- nvmf/common.sh@7 -- # uname -s 00:33:08.036 08:25:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.036 08:25:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.036 08:25:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.036 08:25:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.036 08:25:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.036 08:25:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.036 08:25:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.036 08:25:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.036 08:25:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.036 08:25:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.036 08:25:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:08.036 08:25:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:08.036 08:25:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.036 08:25:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.036 08:25:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.036 08:25:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.036 08:25:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.036 08:25:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.036 08:25:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.036 08:25:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.036 08:25:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.036 08:25:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.036 08:25:38 -- paths/export.sh@5 -- # export PATH 00:33:08.036 08:25:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.036 08:25:38 -- nvmf/common.sh@46 -- # : 0 00:33:08.036 08:25:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:08.037 08:25:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:08.037 08:25:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:08.037 08:25:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.037 08:25:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.037 08:25:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:08.037 08:25:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:08.037 08:25:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:08.037 08:25:38 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:33:08.037 08:25:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:08.037 08:25:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.037 08:25:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:08.037 08:25:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:08.037 08:25:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:08.037 08:25:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.037 08:25:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:08.037 08:25:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.037 08:25:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:08.037 08:25:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:08.037 08:25:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:08.037 08:25:38 -- common/autotest_common.sh@10 -- # set +x 00:33:14.693 08:25:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:14.693 08:25:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:14.693 08:25:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:14.693 08:25:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:14.693 08:25:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:14.693 08:25:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:14.693 08:25:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:14.693 08:25:45 -- nvmf/common.sh@294 -- # net_devs=() 00:33:14.693 08:25:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:14.693 08:25:45 -- nvmf/common.sh@295 -- # e810=() 00:33:14.693 08:25:45 -- nvmf/common.sh@295 -- # local -ga e810 00:33:14.693 08:25:45 -- nvmf/common.sh@296 -- # x722=() 00:33:14.693 08:25:45 -- nvmf/common.sh@296 -- # local -ga x722 00:33:14.693 08:25:45 -- nvmf/common.sh@297 -- # mlx=() 00:33:14.693 08:25:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:14.693 08:25:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.693 08:25:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:14.693 08:25:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:14.693 08:25:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:14.693 08:25:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:14.693 08:25:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:14.693 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:14.693 08:25:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:14.693 08:25:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:14.693 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:14.693 08:25:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:14.693 08:25:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:14.693 08:25:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.693 08:25:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:14.693 08:25:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.693 08:25:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:14.693 Found net devices under 0000:31:00.0: cvl_0_0 00:33:14.693 08:25:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.693 08:25:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:14.693 08:25:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.693 08:25:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:14.693 08:25:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.693 08:25:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:14.693 Found net devices under 0000:31:00.1: cvl_0_1 00:33:14.693 08:25:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.693 08:25:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:14.693 08:25:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:14.693 08:25:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:14.693 08:25:45 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:14.693 08:25:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:14.693 08:25:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.693 08:25:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.693 08:25:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.693 08:25:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:14.693 08:25:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.693 08:25:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.693 08:25:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:14.693 08:25:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.693 08:25:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.693 08:25:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:14.693 08:25:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:14.693 08:25:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.693 08:25:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.954 08:25:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.954 08:25:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.954 08:25:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:14.954 08:25:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.954 08:25:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.954 08:25:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.954 08:25:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:14.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:33:14.954 00:33:14.954 --- 10.0.0.2 ping statistics --- 00:33:14.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.954 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:33:14.954 08:25:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:33:14.954 00:33:14.954 --- 10.0.0.1 ping statistics --- 00:33:14.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.954 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:33:14.954 08:25:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.954 08:25:45 -- nvmf/common.sh@410 -- # return 0 00:33:14.954 08:25:45 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:14.954 08:25:45 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:19.160 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:19.160 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:19.160 08:25:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:19.160 08:25:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:19.160 08:25:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:19.160 08:25:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:19.160 08:25:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:19.160 08:25:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:19.160 08:25:49 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:33:19.160 08:25:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:19.160 08:25:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:19.160 08:25:49 -- common/autotest_common.sh@10 -- # set +x 00:33:19.160 08:25:49 -- nvmf/common.sh@469 -- # nvmfpid=1296184 00:33:19.160 08:25:49 -- nvmf/common.sh@470 -- # waitforlisten 1296184 00:33:19.160 08:25:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:19.160 08:25:49 -- common/autotest_common.sh@819 -- # '[' -z 1296184 ']' 00:33:19.160 08:25:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.160 08:25:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:19.160 08:25:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.160 08:25:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:19.160 08:25:49 -- common/autotest_common.sh@10 -- # set +x 00:33:19.160 [2024-06-11 08:25:49.418990] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:33:19.160 [2024-06-11 08:25:49.419053] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:19.160 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.160 [2024-06-11 08:25:49.507640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:19.160 [2024-06-11 08:25:49.576189] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:19.160 [2024-06-11 08:25:49.576289] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:19.160 [2024-06-11 08:25:49.576295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:19.160 [2024-06-11 08:25:49.576301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:19.160 [2024-06-11 08:25:49.576432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.160 [2024-06-11 08:25:49.576584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.160 [2024-06-11 08:25:49.576586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:19.160 [2024-06-11 08:25:49.576454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:19.733 08:25:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:19.734 08:25:50 -- common/autotest_common.sh@852 -- # return 0 00:33:19.734 08:25:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:19.734 08:25:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:19.734 08:25:50 -- common/autotest_common.sh@10 -- # set +x 00:33:19.734 08:25:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:33:19.734 08:25:50 -- scripts/common.sh@311 -- # local bdf bdfs 00:33:19.734 08:25:50 -- scripts/common.sh@312 -- # local nvmes 00:33:19.734 08:25:50 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:33:19.734 08:25:50 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:19.734 08:25:50 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:33:19.734 08:25:50 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:19.734 08:25:50 -- scripts/common.sh@322 -- # uname -s 00:33:19.734 08:25:50 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:33:19.734 08:25:50 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:33:19.734 08:25:50 -- scripts/common.sh@327 -- # (( 1 )) 00:33:19.734 08:25:50 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:33:19.734 08:25:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:19.734 08:25:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:19.734 08:25:50 -- common/autotest_common.sh@10 -- # set +x 00:33:19.734 ************************************ 00:33:19.734 START TEST 
spdk_target_abort 00:33:19.734 ************************************ 00:33:19.734 08:25:50 -- common/autotest_common.sh@1104 -- # spdk_target 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:19.734 08:25:50 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:19.734 08:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.734 08:25:50 -- common/autotest_common.sh@10 -- # set +x 00:33:19.994 spdk_targetn1 00:33:19.994 08:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.994 08:25:50 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:19.994 08:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.994 08:25:50 -- common/autotest_common.sh@10 -- # set +x 00:33:19.994 [2024-06-11 08:25:50.612609] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.994 08:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.994 08:25:50 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:33:19.994 08:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.994 08:25:50 -- common/autotest_common.sh@10 -- # set +x 00:33:19.994 08:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:19.994 08:25:50 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:33:19.994 08:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:19.994 08:25:50 -- common/autotest_common.sh@10 -- # set +x 00:33:20.255 08:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:33:20.255 08:25:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:20.255 08:25:50 -- common/autotest_common.sh@10 -- # set +x 00:33:20.255 [2024-06-11 08:25:50.652880] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.255 08:25:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:20.255 08:25:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:20.255 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.255 [2024-06-11 08:25:50.890948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1616 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:33:20.255 [2024-06-11 08:25:50.890973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00cb p:1 m:0 dnr:0 00:33:20.516 [2024-06-11 08:25:50.915785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2504 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:20.516 [2024-06-11 08:25:50.915804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:20.516 [2024-06-11 08:25:50.930843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3000 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:20.516 [2024-06-11 08:25:50.930859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:20.516 [2024-06-11 08:25:50.932191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3096 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:20.516 [2024-06-11 08:25:50.932206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0085 p:0 m:0 dnr:0 00:33:23.812 Initializing NVMe Controllers 00:33:23.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:23.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:23.812 Initialization complete. Launching workers. 
00:33:23.813 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 13178, failed: 4 00:33:23.813 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2569, failed to submit 10613 00:33:23.813 success 767, unsuccess 1802, failed 0 00:33:23.813 08:25:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:23.813 08:25:53 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:23.813 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.813 [2024-06-11 08:25:54.019687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:688 len:8 PRP1 0x200007c54000 PRP2 0x0 00:33:23.813 [2024-06-11 08:25:54.019722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:33:23.813 [2024-06-11 08:25:54.058471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:1576 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:33:23.813 [2024-06-11 08:25:54.058495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:00c8 p:1 m:0 dnr:0 00:33:23.813 [2024-06-11 08:25:54.114583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:2848 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:33:23.813 [2024-06-11 08:25:54.114608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:23.813 [2024-06-11 08:25:54.138513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:3392 len:8 PRP1 0x200007c4a000 PRP2 0x0 00:33:23.813 [2024-06-11 08:25:54.138535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:00b4 p:0 m:0 dnr:0 00:33:23.813 [2024-06-11 08:25:54.153285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:3776 len:8 PRP1 0x200007c58000 PRP2 0x0 00:33:23.813 [2024-06-11 08:25:54.153306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00e0 p:0 m:0 dnr:0 00:33:27.111 Initializing NVMe Controllers 00:33:27.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:27.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:27.111 Initialization complete. Launching workers. 
00:33:27.111 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8501, failed: 5 00:33:27.111 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1249, failed to submit 7257 00:33:27.111 success 355, unsuccess 894, failed 0 00:33:27.111 08:25:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:27.111 08:25:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:27.111 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.111 [2024-06-11 08:25:57.271369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:183 nsid:1 lba:1848 len:8 PRP1 0x200007914000 PRP2 0x0 00:33:27.111 [2024-06-11 08:25:57.271409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:183 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:33:27.371 [2024-06-11 08:25:57.912168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:75104 len:8 PRP1 0x2000078e2000 PRP2 0x0 00:33:27.371 [2024-06-11 08:25:57.912192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0037 p:1 m:0 dnr:0 00:33:29.913 Initializing NVMe Controllers 00:33:29.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:29.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:29.913 Initialization complete. Launching workers. 00:33:29.913 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 43680, failed: 2 00:33:29.913 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2692, failed to submit 40990 00:33:29.913 success 580, unsuccess 2112, failed 0 00:33:29.913 08:26:00 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:33:29.913 08:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.913 08:26:00 -- common/autotest_common.sh@10 -- # set +x 00:33:29.913 08:26:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:29.913 08:26:00 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:29.913 08:26:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.913 08:26:00 -- common/autotest_common.sh@10 -- # set +x 00:33:31.820 08:26:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:31.820 08:26:02 -- target/abort_qd_sizes.sh@62 -- # killprocess 1296184 00:33:31.820 08:26:02 -- common/autotest_common.sh@926 -- # '[' -z 1296184 ']' 00:33:31.820 08:26:02 -- common/autotest_common.sh@930 -- # kill -0 1296184 00:33:31.820 08:26:02 -- common/autotest_common.sh@931 -- # uname 00:33:31.820 08:26:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:31.820 08:26:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1296184 00:33:31.820 08:26:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:31.820 08:26:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:31.820 08:26:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1296184' 00:33:31.820 killing process with pid 1296184 00:33:31.820 08:26:02 -- common/autotest_common.sh@945 -- # kill 1296184 00:33:31.820 08:26:02 -- common/autotest_common.sh@950 -- # wait 1296184 
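The spdk_target_abort pass above sweeps SPDK's abort example over three queue depths against the TCP listener it just created. A condensed sketch of the traced rabort() loop, run from the spdk checkout (queue depths, I/O options and target string as logged):

    # illustrative sketch of the traced rabort() flow
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done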
00:33:31.820 00:33:31.820 real 0m12.022s 00:33:31.820 user 0m49.109s 00:33:31.820 sys 0m1.639s 00:33:31.820 08:26:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.820 08:26:02 -- common/autotest_common.sh@10 -- # set +x 00:33:31.820 ************************************ 00:33:31.820 END TEST spdk_target_abort 00:33:31.820 ************************************ 00:33:31.820 08:26:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:33:31.820 08:26:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:31.820 08:26:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:31.820 08:26:02 -- common/autotest_common.sh@10 -- # set +x 00:33:31.820 ************************************ 00:33:31.820 START TEST kernel_target_abort 00:33:31.820 ************************************ 00:33:31.820 08:26:02 -- common/autotest_common.sh@1104 -- # kernel_target 00:33:31.820 08:26:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:33:31.820 08:26:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:33:31.820 08:26:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:33:31.820 08:26:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:33:31.820 08:26:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:33:31.820 08:26:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:31.820 08:26:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:31.820 08:26:02 -- nvmf/common.sh@627 -- # local block nvme 00:33:31.820 08:26:02 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:33:31.820 08:26:02 -- nvmf/common.sh@630 -- # modprobe nvmet 00:33:31.820 08:26:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:31.820 08:26:02 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:35.118 Waiting for block devices as requested 00:33:35.378 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:35.378 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:35.378 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:35.639 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:35.639 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:35.639 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:35.900 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:35.900 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:35.900 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:36.161 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:36.161 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:36.161 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:36.422 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:36.422 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:36.422 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:36.422 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:36.682 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:36.682 08:26:07 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:33:36.682 08:26:07 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:36.682 08:26:07 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:33:36.682 08:26:07 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:33:36.682 08:26:07 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:36.682 No valid GPT data, bailing 00:33:36.682 08:26:07 -- 
scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:36.682 08:26:07 -- scripts/common.sh@393 -- # pt= 00:33:36.682 08:26:07 -- scripts/common.sh@394 -- # return 1 00:33:36.682 08:26:07 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:33:36.682 08:26:07 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:33:36.682 08:26:07 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:33:36.682 08:26:07 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:36.682 08:26:07 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:36.682 08:26:07 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:33:36.682 08:26:07 -- nvmf/common.sh@654 -- # echo 1 00:33:36.682 08:26:07 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:33:36.682 08:26:07 -- nvmf/common.sh@656 -- # echo 1 00:33:36.682 08:26:07 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:33:36.682 08:26:07 -- nvmf/common.sh@663 -- # echo tcp 00:33:36.682 08:26:07 -- nvmf/common.sh@664 -- # echo 4420 00:33:36.682 08:26:07 -- nvmf/common.sh@665 -- # echo ipv4 00:33:36.682 08:26:07 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:36.682 08:26:07 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:33:36.682 00:33:36.682 Discovery Log Number of Records 2, Generation counter 2 00:33:36.682 =====Discovery Log Entry 0====== 00:33:36.682 trtype: tcp 00:33:36.682 adrfam: ipv4 00:33:36.682 subtype: current discovery subsystem 00:33:36.682 treq: not specified, sq flow control disable supported 00:33:36.682 portid: 1 00:33:36.682 trsvcid: 4420 00:33:36.682 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:36.682 traddr: 10.0.0.1 00:33:36.682 eflags: none 00:33:36.682 sectype: none 00:33:36.682 =====Discovery Log Entry 1====== 00:33:36.682 trtype: tcp 00:33:36.682 adrfam: ipv4 00:33:36.682 subtype: nvme subsystem 00:33:36.682 treq: not specified, sq flow control disable supported 00:33:36.682 portid: 1 00:33:36.682 trsvcid: 4420 00:33:36.682 subnqn: kernel_target 00:33:36.682 traddr: 10.0.0.1 00:33:36.682 eflags: none 00:33:36.682 sectype: none 00:33:36.682 08:26:07 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:33:36.682 08:26:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:36.682 08:26:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:36.683 08:26:07 
-- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:36.683 08:26:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:36.683 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.979 Initializing NVMe Controllers 00:33:39.979 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:39.979 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:39.979 Initialization complete. Launching workers. 00:33:39.979 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68936, failed: 0 00:33:39.979 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 68936, failed to submit 0 00:33:39.979 success 0, unsuccess 68936, failed 0 00:33:39.979 08:26:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:39.979 08:26:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:39.979 EAL: No free 2048 kB hugepages reported on node 1 00:33:43.272 Initializing NVMe Controllers 00:33:43.272 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:43.272 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:43.272 Initialization complete. Launching workers. 00:33:43.272 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 110759, failed: 0 00:33:43.272 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27874, failed to submit 82885 00:33:43.272 success 0, unsuccess 27874, failed 0 00:33:43.272 08:26:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:43.272 08:26:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:43.273 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.572 Initializing NVMe Controllers 00:33:46.572 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:46.572 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:46.572 Initialization complete. Launching workers. 
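[Editorial annotation, not part of the console output] The configure_kernel_target trace above only shows the values being echoed, because xtrace does not print redirection targets, so the configfs files they land in are not visible in the log. The following is a reconstruction against the standard kernel nvmet configfs layout: the attribute file names are assumptions, while the values and the mkdir / ln -s / nvme discover steps are taken from the trace.
modprobe nvmet                                        # as traced; nvmet-tcp loading is implicit here (assumption)
sub=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"
echo SPDK-kernel_target > "$sub/attr_serial"          # destination assumed (could equally be attr_model)
echo 1                  > "$sub/attr_allow_any_host"  # destination assumed
echo /dev/nvme0n1       > "$sub/namespaces/1/device_path"
echo 1                  > "$sub/namespaces/1/enable"
echo 10.0.0.1           > "$port/addr_traddr"
echo tcp                > "$port/addr_trtype"
echo 4420               > "$port/addr_trsvcid"
echo ipv4               > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420              # hostnqn/hostid flags from the trace omitted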
00:33:46.572 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 106183, failed: 0 00:33:46.572 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26546, failed to submit 79637 00:33:46.572 success 0, unsuccess 26546, failed 0 00:33:46.572 08:26:16 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:33:46.572 08:26:16 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:33:46.572 08:26:16 -- nvmf/common.sh@677 -- # echo 0 00:33:46.572 08:26:16 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:33:46.572 08:26:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:46.572 08:26:16 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:46.572 08:26:16 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:33:46.572 08:26:16 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:33:46.572 08:26:16 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:33:46.572 00:33:46.572 real 0m14.166s 00:33:46.572 user 0m8.378s 00:33:46.572 sys 0m3.314s 00:33:46.572 08:26:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:46.572 08:26:16 -- common/autotest_common.sh@10 -- # set +x 00:33:46.572 ************************************ 00:33:46.572 END TEST kernel_target_abort 00:33:46.572 ************************************ 00:33:46.572 08:26:16 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:33:46.572 08:26:16 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:33:46.572 08:26:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:46.572 08:26:16 -- nvmf/common.sh@116 -- # sync 00:33:46.572 08:26:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:46.572 08:26:16 -- nvmf/common.sh@119 -- # set +e 00:33:46.572 08:26:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:46.572 08:26:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:46.572 rmmod nvme_tcp 00:33:46.572 rmmod nvme_fabrics 00:33:46.572 rmmod nvme_keyring 00:33:46.572 08:26:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:46.572 08:26:16 -- nvmf/common.sh@123 -- # set -e 00:33:46.572 08:26:16 -- nvmf/common.sh@124 -- # return 0 00:33:46.572 08:26:16 -- nvmf/common.sh@477 -- # '[' -n 1296184 ']' 00:33:46.572 08:26:16 -- nvmf/common.sh@478 -- # killprocess 1296184 00:33:46.572 08:26:16 -- common/autotest_common.sh@926 -- # '[' -z 1296184 ']' 00:33:46.572 08:26:16 -- common/autotest_common.sh@930 -- # kill -0 1296184 00:33:46.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1296184) - No such process 00:33:46.572 08:26:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1296184 is not found' 00:33:46.572 Process with pid 1296184 is not found 00:33:46.572 08:26:16 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:46.572 08:26:16 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:49.875 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:33:49.875 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:49.875 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:49.875 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:49.875 08:26:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:49.875 08:26:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:49.875 08:26:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:49.875 08:26:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:49.875 08:26:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.875 08:26:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:49.875 08:26:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.418 08:26:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:52.418 00:33:52.418 real 0m44.158s 00:33:52.418 user 1m2.653s 00:33:52.418 sys 0m15.283s 00:33:52.418 08:26:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.418 08:26:22 -- common/autotest_common.sh@10 -- # set +x 00:33:52.418 ************************************ 00:33:52.418 END TEST nvmf_abort_qd_sizes 00:33:52.418 ************************************ 00:33:52.418 08:26:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:52.418 08:26:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:52.418 08:26:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:52.418 08:26:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:52.418 08:26:22 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:33:52.418 08:26:22 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:33:52.418 08:26:22 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:33:52.418 08:26:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:52.418 08:26:22 -- common/autotest_common.sh@10 -- # set +x 00:33:52.418 08:26:22 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:33:52.418 08:26:22 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:33:52.418 08:26:22 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:33:52.418 08:26:22 -- common/autotest_common.sh@10 -- # set +x 00:34:00.559 INFO: APP EXITING 00:34:00.559 INFO: killing all VMs 00:34:00.559 INFO: killing vhost app 00:34:00.559 INFO: EXIT DONE 00:34:02.563 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:02.563 0000:80:01.7 (8086 
0b00): Already using the ioatdma driver 00:34:02.563 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:02.563 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:02.563 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:02.563 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:02.824 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:02.824 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:06.128 Cleaning 00:34:06.128 Removing: /var/run/dpdk/spdk0/config 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:06.128 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:06.128 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:06.128 Removing: /var/run/dpdk/spdk1/config 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:06.128 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:06.128 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:06.128 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:06.128 Removing: /var/run/dpdk/spdk2/config 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:06.128 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:06.128 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:06.128 Removing: /var/run/dpdk/spdk3/config 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:06.128 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:06.128 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:06.128 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:06.128 Removing: /var/run/dpdk/spdk4/config 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:06.128 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:06.128 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:06.128 Removing: /dev/shm/bdev_svc_trace.1 00:34:06.128 Removing: /dev/shm/nvmf_trace.0 00:34:06.128 Removing: /dev/shm/spdk_tgt_trace.pid831903 00:34:06.128 Removing: /var/run/dpdk/spdk0 00:34:06.128 Removing: /var/run/dpdk/spdk1 00:34:06.128 Removing: /var/run/dpdk/spdk2 00:34:06.128 Removing: /var/run/dpdk/spdk3 00:34:06.128 Removing: /var/run/dpdk/spdk4 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1000289 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1000650 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1005785 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1012614 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1015828 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1028637 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1039500 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1041528 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1042587 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1063414 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1067950 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1073439 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1075739 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1078106 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1078410 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1078750 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1078823 00:34:06.128 Removing: /var/run/dpdk/spdk_pid1079496 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1081863 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1082848 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1083341 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1090169 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1096664 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1102702 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1147911 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1152770 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1160086 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1161577 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1163139 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1168867 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1173685 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1182750 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1182866 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1187826 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1188064 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1188378 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1188891 00:34:06.389 
Removing: /var/run/dpdk/spdk_pid1189021 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1190252 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1192354 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1194269 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1196204 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1198199 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1200218 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1207705 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1208538 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1209660 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1210943 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1217331 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1220903 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1227544 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1234292 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1241302 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1242123 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1242871 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1243567 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1244633 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1245332 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1246022 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1246716 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1251877 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1252186 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1259500 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1259711 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1262333 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1270331 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1270336 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1276337 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1278559 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1281099 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1282402 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1284850 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1286387 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1296508 00:34:06.389 Removing: /var/run/dpdk/spdk_pid1297028 00:34:06.390 Removing: /var/run/dpdk/spdk_pid1297582 00:34:06.390 Removing: /var/run/dpdk/spdk_pid1300565 00:34:06.390 Removing: /var/run/dpdk/spdk_pid1301186 00:34:06.390 Removing: /var/run/dpdk/spdk_pid1301640 00:34:06.390 Removing: /var/run/dpdk/spdk_pid830399 00:34:06.390 Removing: /var/run/dpdk/spdk_pid831903 00:34:06.390 Removing: /var/run/dpdk/spdk_pid832749 00:34:06.390 Removing: /var/run/dpdk/spdk_pid833751 00:34:06.390 Removing: /var/run/dpdk/spdk_pid834357 00:34:06.390 Removing: /var/run/dpdk/spdk_pid834744 00:34:06.390 Removing: /var/run/dpdk/spdk_pid835135 00:34:06.390 Removing: /var/run/dpdk/spdk_pid835542 00:34:06.390 Removing: /var/run/dpdk/spdk_pid835797 00:34:06.390 Removing: /var/run/dpdk/spdk_pid835979 00:34:06.390 Removing: /var/run/dpdk/spdk_pid836318 00:34:06.390 Removing: /var/run/dpdk/spdk_pid836699 00:34:06.390 Removing: /var/run/dpdk/spdk_pid838028 00:34:06.390 Removing: /var/run/dpdk/spdk_pid841385 00:34:06.390 Removing: /var/run/dpdk/spdk_pid841692 00:34:06.390 Removing: /var/run/dpdk/spdk_pid841972 00:34:06.390 Removing: /var/run/dpdk/spdk_pid842134 00:34:06.390 Removing: /var/run/dpdk/spdk_pid842515 00:34:06.390 Removing: /var/run/dpdk/spdk_pid842839 00:34:06.390 Removing: /var/run/dpdk/spdk_pid843225 00:34:06.390 Removing: /var/run/dpdk/spdk_pid843426 00:34:06.390 Removing: /var/run/dpdk/spdk_pid843633 00:34:06.390 Removing: /var/run/dpdk/spdk_pid843939 00:34:06.652 Removing: /var/run/dpdk/spdk_pid844047 00:34:06.652 Removing: /var/run/dpdk/spdk_pid844310 00:34:06.652 Removing: 
/var/run/dpdk/spdk_pid844747 00:34:06.652 Removing: /var/run/dpdk/spdk_pid845097 00:34:06.652 Removing: /var/run/dpdk/spdk_pid845395 00:34:06.652 Removing: /var/run/dpdk/spdk_pid845542 00:34:06.652 Removing: /var/run/dpdk/spdk_pid845684 00:34:06.652 Removing: /var/run/dpdk/spdk_pid845944 00:34:06.652 Removing: /var/run/dpdk/spdk_pid846121 00:34:06.652 Removing: /var/run/dpdk/spdk_pid846312 00:34:06.652 Removing: /var/run/dpdk/spdk_pid846648 00:34:06.652 Removing: /var/run/dpdk/spdk_pid847003 00:34:06.652 Removing: /var/run/dpdk/spdk_pid847257 00:34:06.652 Removing: /var/run/dpdk/spdk_pid847416 00:34:06.652 Removing: /var/run/dpdk/spdk_pid847714 00:34:06.652 Removing: /var/run/dpdk/spdk_pid848065 00:34:06.652 Removing: /var/run/dpdk/spdk_pid848395 00:34:06.652 Removing: /var/run/dpdk/spdk_pid848574 00:34:06.652 Removing: /var/run/dpdk/spdk_pid848775 00:34:06.652 Removing: /var/run/dpdk/spdk_pid849124 00:34:06.652 Removing: /var/run/dpdk/spdk_pid849460 00:34:06.652 Removing: /var/run/dpdk/spdk_pid849678 00:34:06.652 Removing: /var/run/dpdk/spdk_pid849842 00:34:06.652 Removing: /var/run/dpdk/spdk_pid850184 00:34:06.652 Removing: /var/run/dpdk/spdk_pid850518 00:34:06.652 Removing: /var/run/dpdk/spdk_pid850824 00:34:06.652 Removing: /var/run/dpdk/spdk_pid850958 00:34:06.652 Removing: /var/run/dpdk/spdk_pid851244 00:34:06.652 Removing: /var/run/dpdk/spdk_pid851578 00:34:06.652 Removing: /var/run/dpdk/spdk_pid851935 00:34:06.652 Removing: /var/run/dpdk/spdk_pid852114 00:34:06.652 Removing: /var/run/dpdk/spdk_pid852308 00:34:06.652 Removing: /var/run/dpdk/spdk_pid852646 00:34:06.652 Removing: /var/run/dpdk/spdk_pid852997 00:34:06.652 Removing: /var/run/dpdk/spdk_pid853266 00:34:06.652 Removing: /var/run/dpdk/spdk_pid853434 00:34:06.652 Removing: /var/run/dpdk/spdk_pid853705 00:34:06.652 Removing: /var/run/dpdk/spdk_pid854057 00:34:06.652 Removing: /var/run/dpdk/spdk_pid854393 00:34:06.652 Removing: /var/run/dpdk/spdk_pid854599 00:34:06.652 Removing: /var/run/dpdk/spdk_pid854770 00:34:06.652 Removing: /var/run/dpdk/spdk_pid855123 00:34:06.652 Removing: /var/run/dpdk/spdk_pid855472 00:34:06.652 Removing: /var/run/dpdk/spdk_pid855806 00:34:06.652 Removing: /var/run/dpdk/spdk_pid855932 00:34:06.652 Removing: /var/run/dpdk/spdk_pid856200 00:34:06.652 Removing: /var/run/dpdk/spdk_pid856542 00:34:06.652 Removing: /var/run/dpdk/spdk_pid856895 00:34:06.652 Removing: /var/run/dpdk/spdk_pid856954 00:34:06.652 Removing: /var/run/dpdk/spdk_pid857363 00:34:06.652 Removing: /var/run/dpdk/spdk_pid861894 00:34:06.652 Removing: /var/run/dpdk/spdk_pid959890 00:34:06.652 Removing: /var/run/dpdk/spdk_pid965550 00:34:06.652 Removing: /var/run/dpdk/spdk_pid977668 00:34:06.652 Removing: /var/run/dpdk/spdk_pid984223 00:34:06.652 Removing: /var/run/dpdk/spdk_pid989004 00:34:06.652 Removing: /var/run/dpdk/spdk_pid989706 00:34:06.652 Clean 00:34:06.652 killing process with pid 773733 00:34:16.686 killing process with pid 773730 00:34:16.686 killing process with pid 773732 00:34:16.686 killing process with pid 773731 00:34:16.686 08:26:47 -- common/autotest_common.sh@1436 -- # return 0 00:34:16.686 08:26:47 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:34:16.686 08:26:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:16.686 08:26:47 -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 08:26:47 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:34:16.686 08:26:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:16.686 08:26:47 -- common/autotest_common.sh@10 -- # set +x 00:34:16.686 
08:26:47 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:16.686 08:26:47 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:16.686 08:26:47 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:16.686 08:26:47 -- spdk/autotest.sh@394 -- # hash lcov 00:34:16.686 08:26:47 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:16.686 08:26:47 -- spdk/autotest.sh@396 -- # hostname 00:34:16.687 08:26:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:16.948 geninfo: WARNING: invalid characters removed from testname! 00:34:38.922 08:27:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:41.470 08:27:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:44.016 08:27:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:44.957 08:27:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:46.869 08:27:17 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:48.253 08:27:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:49.639 08:27:19 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:49.640 08:27:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.640 08:27:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:49.640 08:27:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.640 08:27:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.640 08:27:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.640 08:27:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.640 08:27:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.640 08:27:19 -- paths/export.sh@5 -- $ export PATH 00:34:49.640 08:27:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.640 08:27:19 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:49.640 08:27:19 -- common/autobuild_common.sh@435 -- $ date +%s 00:34:49.640 08:27:19 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718087239.XXXXXX 00:34:49.640 08:27:19 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718087239.T5xrGG 00:34:49.640 08:27:19 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:34:49.640 08:27:19 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:34:49.640 08:27:19 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:49.640 08:27:19 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:49.640 08:27:19 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:49.640 08:27:19 -- 
common/autobuild_common.sh@451 -- $ get_config_params 00:34:49.640 08:27:19 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:34:49.640 08:27:19 -- common/autotest_common.sh@10 -- $ set +x 00:34:49.640 08:27:20 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:34:49.640 08:27:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:49.640 08:27:20 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:49.640 08:27:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:49.640 08:27:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:49.640 08:27:20 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:49.640 08:27:20 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:49.640 08:27:20 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:49.640 08:27:20 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:49.640 08:27:20 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:49.640 08:27:20 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:49.640 + [[ -n 730867 ]] 00:34:49.640 + sudo kill 730867 00:34:49.652 [Pipeline] } 00:34:49.670 [Pipeline] // stage 00:34:49.676 [Pipeline] } 00:34:49.692 [Pipeline] // timeout 00:34:49.698 [Pipeline] } 00:34:49.715 [Pipeline] // catchError 00:34:49.720 [Pipeline] } 00:34:49.738 [Pipeline] // wrap 00:34:49.744 [Pipeline] } 00:34:49.761 [Pipeline] // catchError 00:34:49.771 [Pipeline] stage 00:34:49.773 [Pipeline] { (Epilogue) 00:34:49.789 [Pipeline] catchError 00:34:49.790 [Pipeline] { 00:34:49.805 [Pipeline] echo 00:34:49.807 Cleanup processes 00:34:49.813 [Pipeline] sh 00:34:50.105 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:50.105 1318440 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:50.119 [Pipeline] sh 00:34:50.406 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:50.406 ++ awk '{print $1}' 00:34:50.406 ++ grep -v 'sudo pgrep' 00:34:50.406 + sudo kill -9 00:34:50.406 + true 00:34:50.418 [Pipeline] sh 00:34:50.706 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:03.043 [Pipeline] sh 00:35:03.326 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:03.326 Artifacts sizes are good 00:35:03.340 [Pipeline] archiveArtifacts 00:35:03.348 Archiving artifacts 00:35:03.606 [Pipeline] sh 00:35:03.893 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:03.908 [Pipeline] cleanWs 00:35:03.918 [WS-CLEANUP] Deleting project workspace... 00:35:03.918 [WS-CLEANUP] Deferred wipeout is used... 00:35:03.925 [WS-CLEANUP] done 00:35:03.927 [Pipeline] } 00:35:03.946 [Pipeline] // catchError 00:35:03.958 [Pipeline] sh 00:35:04.245 + logger -p user.info -t JENKINS-CI 00:35:04.255 [Pipeline] } 00:35:04.271 [Pipeline] // stage 00:35:04.277 [Pipeline] } 00:35:04.293 [Pipeline] // node 00:35:04.299 [Pipeline] End of Pipeline 00:35:04.343 Finished: SUCCESS
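[Editorial annotation, not part of the console output] The coverage post-processing traced between 08:26:47 and 08:27:19 above reduces to one capture, one merge, and a chain of lcov -r filters that drop DPDK, system headers, and a few SPDK apps from the report. A condensed sketch with the --rc genhtml_* options omitted and the long paths shortened into variables; everything else mirrors the traced commands:
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$spdk/../output
rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
lcov $rc --no-external -q -c -d "$spdk" -t spdk-cyp-12 -o "$out/cov_test.info"
lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $rc -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done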